1. Introduction 1. 介绍
Proxmox VE is a platform to run virtual machines and containers. It is
based on Debian Linux, and completely open source. For maximum
flexibility, we implemented two virtualization technologies -
Kernel-based Virtual Machine (KVM) and container-based virtualization
(LXC).
Proxmox VE 是一个运行虚拟机和容器的平台。它基于 Debian Linux,完全开源。为了实现最大的灵活性,我们实现了两种虚拟化技术——基于内核的虚拟机(KVM)和基于容器的虚拟化(LXC)。
One main design goal was to make administration as easy as
possible. You can use Proxmox VE on a single node, or assemble a cluster of
many nodes. All management tasks can be done using our web-based
management interface, and even a novice user can set up and install
Proxmox VE within minutes.
一个主要的设计目标是使管理尽可能简单。您可以在单个节点上使用 Proxmox VE,或者组建由多个节点组成的集群。所有管理任务都可以通过我们的基于网页的管理界面完成,即使是新手用户也能在几分钟内设置和安装 Proxmox VE。
1.1. Central Management 1.1. 集中管理
While many people start with a single node, Proxmox VE can scale out to a
large set of clustered nodes. The cluster stack is fully integrated
and ships with the default installation.
虽然许多人从单个节点开始,但 Proxmox VE 可以扩展到大量集群节点。集群堆栈完全集成,并随默认安装一起提供。
-
Unique Multi-Master Design
独特的多主设计 -
The integrated web-based management interface gives you a clean overview of all your KVM guests and Linux containers and even of your whole cluster. You can easily manage your VMs and containers, storage or cluster from the GUI. There is no need to install a separate, complex, and pricey management server.
集成的基于网页的管理界面为您提供所有 KVM 虚拟机和 Linux 容器,甚至整个集群的清晰概览。您可以轻松地通过 GUI 管理虚拟机和容器、存储或集群。无需安装单独的、复杂且昂贵的管理服务器。 -
Proxmox Cluster File System (pmxcfs)
Proxmox 集群文件系统(pmxcfs) -
Proxmox VE uses the unique Proxmox Cluster file system (pmxcfs), a database-driven file system for storing configuration files. This enables you to store the configuration of thousands of virtual machines. By using corosync, these files are replicated in real time on all cluster nodes. The file system stores all data inside a persistent database on disk; nonetheless, a copy of the data resides in RAM, which limits the maximum storage size to 30 MB - more than enough for thousands of VMs.
Proxmox VE 使用独特的 Proxmox 集群文件系统(pmxcfs),这是一种基于数据库的文件系统,用于存储配置文件。它使您能够存储数千台虚拟机的配置。通过使用 corosync,这些文件会在所有集群节点上实时复制。该文件系统将所有数据存储在磁盘上的持久数据库中,尽管如此,数据的副本仍驻留在内存中,最大存储容量为 30MB——足以容纳数千台虚拟机。Proxmox VE is the only virtualization platform using this unique cluster file system.
Proxmox VE 是唯一使用这种独特集群文件系统的虚拟化平台。 -
Web-based Management Interface
基于网页的管理界面 -
Proxmox VE is simple to use. Management tasks can be done via the included web-based management interface - there is no need to install a separate management tool or any additional management node with huge databases. The multi-master tool allows you to manage your whole cluster from any node of your cluster. The central web-based management - based on the JavaScript Framework (ExtJS) - empowers you to control all functionalities from the GUI and to review the history and syslogs of each single node. This includes running backup or restore jobs, live migration, or HA-triggered activities.
Proxmox VE 使用简单。管理任务可以通过内置的基于网页的管理界面完成——无需安装单独的管理工具或任何带有庞大数据库的额外管理节点。多主工具允许您从集群中的任何节点管理整个集群。基于 JavaScript 框架(ExtJS)的中央网页管理界面,使您能够通过图形界面控制所有功能,并查看每个节点的历史记录和系统日志。这包括运行备份或恢复任务、实时迁移或高可用性触发的活动。 - Command Line 命令行
-
For advanced users who are used to the comfort of the Unix shell or Windows PowerShell, Proxmox VE provides a command-line interface to manage all the components of your virtual environment. This command-line interface has intelligent tab completion and full documentation in the form of UNIX man pages.
对于习惯于 Unix shell 或 Windows Powershell 操作的高级用户,Proxmox VE 提供了一个命令行界面,用于管理虚拟环境的所有组件。该命令行界面支持智能制表符补全,并提供以 UNIX man 手册页形式的完整文档。 - REST API
-
Proxmox VE uses a RESTful API. We chose JSON as the primary data format, and the whole API is formally defined using JSON Schema. This enables fast and easy integration for third-party management tools like custom hosting environments (see the example after this list).
Proxmox VE 使用 RESTful API。我们选择 JSON 作为主要数据格式,整个 API 通过 JSON Schema 正式定义。这使得第三方管理工具(如定制托管环境)能够快速且轻松地集成。 -
Role-based Administration
基于角色的管理 -
You can define granular access for all objects (like VMs, storages, nodes, etc.) by using the role based user- and permission management. This allows you to define privileges and helps you to control access to objects. This concept is also known as access control lists: Each permission specifies a subject (a user or group) and a role (set of privileges) on a specific path.
您可以通过基于角色的用户和权限管理,为所有对象(如虚拟机、存储、节点等)定义细粒度的访问权限。这使您能够定义特权,并帮助您控制对对象的访问。该概念也称为访问控制列表:每个权限指定一个主体(用户或组)和一个角色(特权集),作用于特定路径。 - Authentication Realms 认证域
-
Proxmox VE supports multiple authentication sources like Microsoft Active Directory, LDAP, Linux PAM standard authentication or the built-in Proxmox VE authentication server.
Proxmox VE 支持多种认证源,如 Microsoft Active Directory、LDAP、Linux PAM 标准认证或内置的 Proxmox VE 认证服务器。
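As a small illustration of the command-line interface and REST API mentioned above, the first command below queries the cluster node list through pvesh, the built-in API shell, and the second does the same over HTTPS using an API token. The host name pve1.example.com and the token root@pam!monitoring are placeholders for your own node and credentials; -k only skips certificate verification for the default self-signed certificate.
# pvesh get /nodes
# curl -k -H 'Authorization: PVEAPIToken=root@pam!monitoring=<secret-uuid>' https://pve1.example.com:8006/api2/json/nodes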
1.2. Flexible Storage 1.2. 灵活存储
The Proxmox VE storage model is very flexible. Virtual machine images
can either be stored on one or several local storages, or on shared
storage like NFS or SAN. There are no limits; you may configure as
many storage definitions as you like. You can use all storage
technologies available for Debian Linux.
Proxmox VE 的存储模型非常灵活。虚拟机镜像可以存储在一个或多个本地存储上,也可以存储在共享存储上,如 NFS 和 SAN。没有限制,您可以配置任意数量的存储定义。您可以使用 Debian Linux 支持的所有存储技术。
One major benefit of storing VMs on shared storage is the ability to
live-migrate running machines without any downtime, as all nodes in
the cluster have direct access to VM disk images.
将虚拟机存储在共享存储上的一个主要好处是能够在不中断运行的情况下进行实时迁移,因为集群中的所有节点都可以直接访问虚拟机磁盘镜像。
We currently support the following Network storage types:
我们目前支持以下网络存储类型:
-
LVM Group (network backing with iSCSI targets)
LVM 组(使用 iSCSI 目标的网络支持) -
iSCSI target iSCSI 目标
-
NFS Share NFS 共享
-
CIFS Share CIFS 共享
-
Ceph RBD
-
Directly use iSCSI LUNs
直接使用 iSCSI LUN -
GlusterFS
Local storage types supported are:
支持的本地存储类型有:
-
LVM Group (local backing devices like block devices, FC devices, DRBD, etc.)
LVM 组(本地后端设备,如区块设备、光纤通道设备、DRBD 等) -
Directory (storage on existing filesystem)
目录(现有文件系统上的存储) -
ZFS
1.3. Integrated Backup and Restore
1.3. 集成备份与恢复
The integrated backup tool (vzdump) creates consistent snapshots of
running Containers and KVM guests. It basically creates an archive of
the VM or CT data which includes the VM/CT configuration files.
集成的备份工具(vzdump)可以创建正在运行的容器和 KVM 虚拟机的一致性快照。它基本上会创建一个包含虚拟机或容器数据的归档文件,其中包括虚拟机/容器的配置文件。
KVM live backup works for all storage types including VM images on
NFS, CIFS, iSCSI LUN, Ceph RBD. The new backup format is optimized for storing
VM backups quickly and effectively (sparse files, out-of-order data, minimized I/O).
KVM 在线备份适用于所有存储类型,包括存放在 NFS、CIFS、iSCSI LUN、Ceph RBD 上的虚拟机镜像。新的备份格式经过优化,能够快速高效地存储虚拟机备份(稀疏文件、无序数据、最小化 I/O)。
1.4. High Availability Cluster
1.4. 高可用性集群
A multi-node Proxmox VE HA Cluster enables the definition of highly
available virtual servers. The Proxmox VE HA Cluster is based on
proven Linux HA technologies, providing stable and reliable HA
services.
多节点 Proxmox VE 高可用性集群支持定义高可用虚拟服务器。Proxmox VE 高可用性集群基于成熟的 Linux 高可用技术,提供稳定可靠的高可用服务。
1.5. Flexible Networking
1.5. 灵活的网络配置
Proxmox VE uses a bridged networking model. All VMs can share one
bridge as if virtual network cables from each guest were all plugged
into the same switch. For connecting VMs to the outside world, bridges
are attached to physical network cards and assigned a TCP/IP
configuration.
Proxmox VE 使用桥接网络模型。所有虚拟机都可以共享一个桥,就像每个客户机的虚拟网线都插入同一个交换机一样。为了将虚拟机连接到外部网络,桥接会连接到物理网卡并分配 TCP/IP 配置。
For further flexibility, VLANs (IEEE 802.1q) and network
bonding/aggregation are possible. In this way it is possible to build
complex, flexible virtual networks for the Proxmox VE hosts,
leveraging the full power of the Linux network stack.
为了进一步的灵活性,支持 VLAN(IEEE 802.1q)和网络绑定/聚合。通过这种方式,可以为 Proxmox VE 主机构建复杂且灵活的虚拟网络,充分利用 Linux 网络栈的全部功能。
1.6. Integrated Firewall
1.6. 集成防火墙
The integrated firewall allows you to filter network packets on
any VM or Container interface. Common sets of firewall rules can
be grouped into “security groups”.
集成防火墙允许你在任何虚拟机或容器接口上过滤网络数据包。常用的防火墙规则集可以归组为“安全组”。
1.7. Hyper-converged Infrastructure
1.7. 超融合基础设施
Proxmox VE is a virtualization platform that tightly integrates compute, storage and
networking resources, manages highly available clusters, backup/restore as well
as disaster recovery. All components are software-defined and compatible with
one another.
Proxmox VE 是一个虚拟化平台,紧密集成了计算、存储和网络资源,管理高可用集群、备份/恢复以及灾难恢复。所有组件均为软件定义且彼此兼容。
Therefore, it is possible to administer them like a single system via the
centralized web management interface. These capabilities make Proxmox VE an ideal
choice to deploy and manage an open source
hyper-converged infrastructure.
因此,可以通过集中式的网页管理界面将它们作为单一系统进行管理。这些功能使 Proxmox VE 成为部署和管理开源超融合基础设施的理想选择。
1.7.1. Benefits of a Hyper-Converged Infrastructure (HCI) with Proxmox VE
1.7.1. 使用 Proxmox VE 的超融合基础设施(HCI)优势
A hyper-converged infrastructure (HCI) is especially useful for deployments in
which a high infrastructure demand meets a low administration budget, for
distributed setups such as remote and branch office environments or for virtual
private and public clouds.
超融合基础设施(HCI)特别适用于基础设施需求高而管理预算低的部署场景,如远程和分支机构环境的分布式设置,或虚拟私有云和公共云。
HCI provides the following advantages:
HCI 提供以下优势:
-
Scalability: seamless expansion of compute, network and storage devices (i.e. scale up servers and storage quickly and independently from each other).
可扩展性:计算、网络和存储设备的无缝扩展(即快速且相互独立地扩展服务器和存储)。 -
Low cost: Proxmox VE is open source and integrates all components you need such as compute, storage, networking, backup, and management center. It can replace an expensive compute/storage infrastructure.
低成本:Proxmox VE 是开源的,集成了计算、存储、网络、备份和管理中心等所有所需组件。它可以替代昂贵的计算/存储基础设施。 -
Data protection and efficiency: services such as backup and disaster recovery are integrated.
数据保护与效率:集成了备份和灾难恢复等服务。 -
Simplicity: easy configuration and centralized administration.
简易性:配置简单,集中管理。 -
Open Source: No vendor lock-in.
开源:无供应商锁定。
1.7.2. Hyper-Converged Infrastructure: Storage
1.7.2. 超融合基础设施:存储
Proxmox VE has tightly integrated support for deploying a hyper-converged storage
infrastructure. You can, for example, deploy and manage the following two
storage technologies by using the web interface only:
Proxmox VE 紧密集成了部署超融合存储基础设施的支持。例如,您可以仅通过网页界面部署和管理以下两种存储技术:
-
Ceph: both a self-healing and self-managing shared, reliable and highly scalable storage system. Check out how to manage Ceph services on Proxmox VE nodes
Ceph:一种自愈且自管理的共享、可靠且高度可扩展的存储系统。查看如何在 Proxmox VE 节点上管理 Ceph 服务 -
ZFS: a combined file system and logical volume manager with extensive protection against data corruption, various RAID modes, fast and cheap snapshots - among other features. Find out how to leverage the power of ZFS on Proxmox VE nodes.
ZFS:一种结合了文件系统和逻辑卷管理器的技术,具有广泛的数据损坏保护、多种 RAID 模式、快速且廉价的快照等功能。了解如何在 Proxmox VE 节点上利用 ZFS 的强大功能。
Besides the above, Proxmox VE also supports integrating a wide range of
additional storage technologies. You can find out about them in the
Storage Manager chapter.
除了上述内容,Proxmox VE 还支持集成多种额外的存储技术。您可以在存储管理章节中了解相关内容。
1.8. Why Open Source
1.8. 为什么选择开源
Proxmox VE uses a Linux kernel and is based on the Debian GNU/Linux
Distribution. The source code of Proxmox VE is released under the
GNU Affero General Public
License, version 3. This means that you are free to inspect the
source code at any time or contribute to the project yourself.
Proxmox VE 使用 Linux 内核,基于 Debian GNU/Linux 发行版。Proxmox VE 的源代码在 GNU Affero 通用公共许可证第 3 版下发布。这意味着您可以随时查看源代码或自行为项目做出贡献。
At Proxmox, we are committed to using open source software whenever
possible. Using open source software guarantees full access to all
functionalities - as well as high security and reliability. We think
that everybody should have the right to access the source code of a
software to run it, build on it, or submit changes back to the
project. Everybody is encouraged to contribute while Proxmox ensures
the product always meets professional quality criteria.
在 Proxmox,我们致力于尽可能使用开源软件。使用开源软件保证了对所有功能的完全访问权限,同时也保证了高安全性和可靠性。我们认为每个人都应该有权访问软件的源代码,以运行软件、在其基础上进行构建,或将更改提交回项目。我们鼓励每个人贡献代码,同时 Proxmox 确保产品始终符合专业质量标准。
Open source software also helps to keep your costs low and makes your
core infrastructure independent from a single vendor.
开源软件还有助于降低您的成本,使您的核心基础设施不依赖于单一供应商。
1.9. Your benefits with Proxmox VE
1.9. 使用 Proxmox VE 的优势
-
Open source software 开源软件
-
No vendor lock-in 无供应商锁定
-
Linux kernel Linux 内核
-
Fast installation and easy to use
快速安装且易于使用 -
Web-based management interface
基于网页的管理界面 -
REST API
-
Huge active community 庞大的活跃社区
-
Low administration costs and simple deployment
低管理成本和简单部署
1.10. Getting Help 1.10. 获取帮助
1.10.1. Proxmox VE Wiki
1.10.1. Proxmox VE 维基
The primary source of information is the Proxmox VE Wiki. It combines the reference
documentation with user contributed content.
主要的信息来源是 Proxmox VE 维基。它结合了参考文档和用户贡献的内容。
1.10.2. Community Support Forum
1.10.2. 社区支持论坛
Proxmox VE itself is fully open source, so we always encourage our users to discuss
and share their knowledge using the Proxmox VE Community Forum. The forum is moderated by the
Proxmox support team, and has a large user base from all around the world.
Needless to say, such a large forum is a great place to get information.
Proxmox VE 本身是完全开源的,因此我们始终鼓励用户通过 Proxmox VE 社区论坛进行讨论和分享知识。该论坛由 Proxmox 支持团队进行管理,拥有来自世界各地的大量用户。毋庸置疑,这样一个大型论坛是获取信息的绝佳场所。
1.10.3. Mailing Lists 1.10.3. 邮件列表
This is a fast way to communicate with the Proxmox VE community via email.
这是通过电子邮件与 Proxmox VE 社区快速交流的一种方式。
-
Mailing list for users: Proxmox VE User List
用户邮件列表:Proxmox VE 用户列表
Proxmox VE is fully open source and contributions are welcome! The primary
communication channel for developers is the:
Proxmox VE 完全开源,欢迎贡献!开发者的主要交流渠道是:
-
Mailing list for developers: Proxmox VE development discussion
开发者邮件列表:Proxmox VE 开发讨论
1.10.4. Commercial Support
1.10.4. 商业支持
Proxmox Server Solutions GmbH also offers enterprise support available as
Proxmox VE Subscription Service Plans.
All users with a subscription get access to the Proxmox VE
Enterprise Repository, and—with a Basic, Standard
or Premium subscription—also to the Proxmox Customer Portal. The customer
portal provides help and support with guaranteed response times from the Proxmox VE
developers.
Proxmox Server Solutions GmbH 还提供企业支持,作为 Proxmox VE 订阅服务计划的一部分。所有订阅用户都可以访问 Proxmox VE 企业代码仓库,并且拥有基础、标准或高级订阅的用户还可以访问 Proxmox 客户门户。客户门户提供帮助和支持,并由 Proxmox VE 开发人员保证响应时间。
For volume discounts, or more information in general, please contact
sales@proxmox.com.
如需批量折扣或更多信息,请联系 sales@proxmox.com。
1.10.5. Bug Tracker 1.10.5. 缺陷跟踪器
Proxmox runs a public bug tracker at https://bugzilla.proxmox.com. If an issue
appears, file your report there. An issue can be a bug as well as a request for
a new feature or enhancement. The bug tracker helps to keep track of the issue
and will send a notification once it has been solved.
Proxmox 在 https://bugzilla.proxmox.com 运行一个公开的缺陷跟踪器。如果出现问题,请在那里提交报告。问题可以是缺陷,也可以是新功能或改进的请求。缺陷跟踪器有助于跟踪问题,并在问题解决后发送通知。
1.11. Project History 1.11. 项目历史
The project started in 2007, followed by a first stable version in 2008. At the
time we used OpenVZ for containers, and QEMU with KVM for virtual machines. The
clustering features were limited, and the user interface was simple (server
generated web page).
该项目始于 2007 年,2008 年发布了第一个稳定版本。当时我们使用 OpenVZ 作为容器技术,QEMU 配合 KVM 用于虚拟机。集群功能有限,用户界面也很简单(服务器生成的网页)。
But we quickly developed new features using the
Corosync cluster stack, and the
introduction of the new Proxmox cluster file system (pmxcfs) was a big step
forward, because it completely hides the cluster complexity from the user.
Managing a cluster of 16 nodes is as simple as managing a single node.
但我们很快利用 Corosync 集群栈开发了新功能,新的 Proxmox 集群文件系统(pmxcfs)的引入是一个重大进步,因为它完全将集群的复杂性对用户隐藏。管理 16 个节点的集群就像管理单个节点一样简单。
The introduction of our new REST API, with a complete declarative specification
written in JSON-Schema, enabled other people to integrate Proxmox VE into their
infrastructure, and made it easy to provide additional services.
我们引入了新的 REST API,配备了用 JSON-Schema 编写的完整声明式规范,使其他人能够将 Proxmox VE 集成到他们的基础设施中,并且便于提供额外的服务。
Also, the new REST API made it possible to replace the original user interface
with a modern client side single-page application using JavaScript. We also
replaced the old Java based VNC console code with
noVNC. So you only need a web browser to manage
your VMs.
此外,新的 REST API 使得用基于 JavaScript 的现代客户端单页应用替代原有的用户界面成为可能。我们还用 noVNC 替换了旧的基于 Java 的 VNC 控制台代码。因此,您只需一个网页浏览器即可管理您的虚拟机。
The support for various storage types is another big task. Notably, Proxmox VE was
the first distribution to ship ZFS on Linux by default
in 2014. Another milestone was the ability to run and manage
Ceph storage on the hypervisor nodes. Such setups are
extremely cost effective.
对各种存储类型的支持是另一项重大任务。值得注意的是,Proxmox VE 在 2014 年成为首个默认集成 Linux 上 ZFS 的发行版。另一个里程碑是能够在虚拟化节点上运行和管理 Ceph 存储。这类配置极具成本效益。
When our project started we were among the first companies providing commercial
support for KVM. The KVM project itself continuously evolved, and is now a
widely used hypervisor. New features arrive with each release. We developed the
KVM live backup feature, which makes it possible to create snapshot backups on
any storage type.
当我们的项目启动时,我们是最早提供 KVM 商业支持的公司之一。KVM 项目本身不断发展,如今已成为广泛使用的虚拟机监控程序。每个版本都会带来新功能。我们开发了 KVM 在线备份功能,使得可以在任何存储类型上创建快照备份。
The most notable change with version 4.0 was the move from OpenVZ to
LXC. Containers are now deeply integrated, and
they can use the same storage and network features as virtual machines. At the
same time we introduced the easy-to-use High
Availability (HA) manager, simplifying the configuration and management of
highly available setups.
版本 4.0 最显著的变化是从 OpenVZ 转向 LXC。容器现在被深度集成,可以使用与虚拟机相同的存储和网络功能。与此同时,我们引入了易于使用的高可用性(HA)管理器,简化了高可用性配置和管理。
During the development of Proxmox VE 5 the asynchronous
storage replication as well as automated
certificate management using ACME/Let’s
Encrypt were introduced, among many other features.
在 Proxmox VE 5 的开发过程中,引入了异步存储复制以及使用 ACME/Let’s Encrypt 的自动证书管理功能,还有许多其他特性。
The Software Defined Network (SDN) stack was developed in
cooperation with our community. It was integrated into the web interface as
an experimental feature in version 6.2, simplifying the management of
sophisticated network configurations. Since version 8.1, the SDN integration is
fully supported and installed by default.
软件定义网络(SDN)堆栈是在与社区合作下开发的。它作为实验性功能集成到 6.2 版本的网页界面中,简化了复杂网络配置的管理。自 8.1 版本起,SDN 集成得到全面支持并默认安装。
2020 marked the release of a new project, the
Proxmox
Backup Server, written in the Rust programming language. Proxmox Backup Server
is deeply integrated with Proxmox VE and significantly improves backup capabilities
by implementing incremental backups, deduplication, and much more.
2020 年发布了一个新项目——Proxmox 备份服务器,该项目使用 Rust 编程语言编写。Proxmox 备份服务器与 Proxmox VE 深度集成,通过实现增量备份、去重等功能,显著提升了备份能力。
Another new tool, the Proxmox
Offline Mirror, was released in 2022, enabling subscriptions for systems which
have no connection to the public internet.
另一个新工具,Proxmox 离线镜像,于 2022 年发布,使得没有公共互联网连接的系统也能使用订阅服务。
The highly requested dark theme for the web interface was introduced in 2023.
Later that year, version 8.0 integrated access to the Ceph enterprise
repository. Now access to the most stable Ceph repository comes with any
Proxmox VE subscription.
备受期待的网页界面暗色主题于 2023 年推出。同年晚些时候,8.0 版本集成了对 Ceph 企业代码仓库的访问。现在,任何 Proxmox VE 订阅都包含对最稳定 Ceph 代码仓库的访问权限。
Automated and unattended installation for the official
ISO installer was introduced in version 8.2,
significantly simplifying large deployments of Proxmox VE.
官方 ISO 安装程序的自动化和无人值守安装功能在 8.2 版本中引入,大大简化了 Proxmox VE 的大规模部署。
With the import wizard, equally introduced in
version 8.2, users can easily and efficiently migrate guests directly from other
hypervisors like VMware ESXi [1].
Additionally, archives in Open Virtualization Format (OVF/OVA) can now be
directly imported from file-based storages in the web interface.
同样在 8.2 版本中引入的导入向导,使用户能够轻松高效地直接从其他虚拟机管理程序如 VMware ESXi 迁移虚拟机。此外,网页界面现在支持直接从基于文件的存储导入开放虚拟化格式(OVF/OVA)归档。
1.12. Improving the Proxmox VE Documentation
1.12. 改进 Proxmox VE 文档
Contributions and improvements to the Proxmox VE documentation are always welcome.
There are several ways to contribute.
欢迎对 Proxmox VE 文档进行贡献和改进。有多种方式可以参与。
If you find errors or other room for improvement in this documentation, please
file a bug at the Proxmox bug tracker to propose
a correction.
如果您发现文档中有错误或其他改进空间,请在 Proxmox 缺陷跟踪系统中提交一个缺陷报告,提出修正建议。
If you want to propose new content, choose one of the following options:
如果您想提出新的内容,请选择以下选项之一:
-
The wiki: For specific setups, how-to guides, or tutorials the wiki is the right option to contribute.
维基:对于特定的设置、操作指南或教程,维基是贡献的合适选择。 -
The reference documentation: For general content that will be helpful to all users please propose your contribution for the reference documentation. This includes all information about how to install, configure, use, and troubleshoot Proxmox VE features. The reference documentation is written in the asciidoc format. To edit the documentation you need to clone the git repository at git://git.proxmox.com/git/pve-docs.git; then follow the README.adoc document.
参考文档:对于对所有用户都有帮助的一般内容,请将您的贡献提议提交到参考文档中。这包括有关如何安装、配置、使用和排除 Proxmox VE 功能故障的所有信息。参考文档采用 asciidoc 格式编写。要编辑文档,您需要克隆代码仓库 git://git.proxmox.com/git/pve-docs.git;然后按照 README.adoc 文档进行操作。
If you are interested in working on the Proxmox VE codebase, the
Developer Documentation wiki article will
show you where to start. 如果您有兴趣参与 Proxmox VE 代码库的开发,开发者文档维基文章将向您展示从哪里开始。
1.13. Translating Proxmox VE
1.13. 翻译 Proxmox VE
The Proxmox VE user interface is in English by default. However, thanks to the
contributions of the community, translations to other languages are also available.
We welcome any support in adding new languages, translating the latest features, and
improving incomplete or inconsistent translations.
Proxmox VE 用户界面默认是英文的。不过,感谢社区的贡献,也提供了其他语言的翻译。我们欢迎任何支持,帮助添加新语言、翻译最新功能以及改进不完整或不一致的翻译。
We use gettext for the management of the
translation files. Tools like Poedit offer a nice user
interface to edit the translation files, but you can use whatever editor you’re
comfortable with. No programming knowledge is required for translating.
我们使用 gettext 来管理翻译文件。像 Poedit 这样的工具提供了一个友好的用户界面来编辑翻译文件,但你也可以使用任何你习惯的编辑器。翻译不需要编程知识。
1.13.1. Translating with git
1.13.1. 使用 git 进行翻译
The language files are available as a
git repository. If you are familiar
with git, please contribute according to our
Developer Documentation.
语言文件以 git 代码仓库的形式提供。如果你熟悉 git,请根据我们的开发者文档进行贡献。
You can create a new translation by doing the following (replace <LANG> with the
language ID):
您可以通过以下方式创建新的翻译(将 <LANG> 替换为语言 ID):
# git clone git://git.proxmox.com/git/proxmox-i18n.git
# cd proxmox-i18n
# make init-<LANG>.po
Or you can edit an existing translation, using the editor of your choice:
或者您可以使用您选择的编辑器编辑现有的翻译:
# poedit <LANG>.po
1.13.2. Translating without git
1.13.2. 无需使用 git 进行翻译
Even if you are not familiar with git, you can help translate Proxmox VE.
To start, you can download the language files
here. Find the
language you want to improve, then right click on the "raw" link of this language
file and select Save Link As…. Make your changes to the file, and then
send your final translation directly to office(at)proxmox.com, together with a
signed
contributor license agreement.
即使您不熟悉 git,也可以帮助翻译 Proxmox VE。首先,您可以在此处下载语言文件。找到您想要改进的语言,然后右键点击该语言文件的“raw”链接,选择“另存为…”。对文件进行修改后,将您的最终翻译连同签署的贡献者许可协议一起直接发送至 office(at)proxmox.com。
1.13.3. Testing the Translation
1.13.3. 测试翻译
In order for the translation to be used in Proxmox VE, you must first translate
the .po file into a .js file. You can do this by invoking the following script,
which is located in the same repository:
为了使翻译能够在 Proxmox VE 中使用,您必须先将 .po 文件转换为 .js 文件。您可以通过调用位于同一代码仓库中的以下脚本来完成此操作:
# ./po2js.pl -t pve xx.po >pve-lang-xx.js
The resulting file pve-lang-xx.js can then be copied to the directory
/usr/share/pve-i18n, on your proxmox server, in order to test it out.
生成的文件 pve-lang-xx.js 然后可以复制到您的 proxmox 服务器上的 /usr/share/pve-i18n 目录中,以便进行测试。
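For example, assuming your node is reachable as pve1.example.com (a placeholder host name), the file could be copied over SSH like this:
# scp pve-lang-xx.js root@pve1.example.com:/usr/share/pve-i18n/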
Alternatively, you can build a deb package by running the following command from
the root of the repository:
或者,您也可以在代码仓库根目录下运行以下命令来构建一个 deb 包:
# make deb
For either of these methods to work, you need to have the following
perl packages installed on your system. For Debian/Ubuntu: 要使这两种方法中的任何一种生效,您需要在系统上安装以下 perl 软件包。对于 Debian/Ubuntu:
# apt-get install perl liblocale-po-perl libjson-perl
1.13.4. Sending the Translation
1.13.4. 发送翻译文件
You can send the finished translation (.po file) to the Proxmox team at the address
office(at)proxmox.com, along with a signed contributor license agreement.
Alternatively, if you have some developer experience, you can send it as a
patch to the Proxmox VE development mailing list. See
Developer Documentation.
您可以将完成的翻译文件(.po 文件)发送到 Proxmox 团队,邮箱地址为 office(at)proxmox.com,同时附上签署的贡献者许可协议。或者,如果您有一定的开发经验,也可以将其作为补丁发送到 Proxmox VE 开发邮件列表。详情请参见开发者文档。
2. Installing Proxmox VE
2. 安装 Proxmox VE
Proxmox VE is based on Debian. This is why the install disk images (ISO files)
provided by Proxmox include a complete Debian system as well as all necessary
Proxmox VE packages.
Proxmox VE 基于 Debian。这就是为什么 Proxmox 提供的安装光盘映像(ISO 文件)包含完整的 Debian 系统以及所有必要的 Proxmox VE 软件包。
See the support table in the FAQ for the
relationship between Proxmox VE releases and Debian releases. 请参阅常见问题解答中的支持表,了解 Proxmox VE 版本与 Debian 版本之间的对应关系。
The installer will guide you through the setup, allowing you to partition the
local disk(s), apply basic system configurations (for example, timezone,
language, network) and install all required packages. This process should not
take more than a few minutes. Installing with the provided ISO is the
recommended method for new and existing users.
安装程序将引导您完成设置,允许您对本地磁盘进行分区,应用基本系统配置(例如时区、语言、网络)并安装所有必需的软件包。此过程通常不会超过几分钟。使用提供的 ISO 进行安装是新用户和现有用户推荐的方法。
Alternatively, Proxmox VE can be installed on top of an existing Debian system. This
option is only recommended for advanced users because detailed knowledge about
Proxmox VE is required.
另外,Proxmox VE 也可以安装在现有的 Debian 系统之上。此选项仅推荐给高级用户,因为需要对 Proxmox VE 有详细的了解。
2.1. System Requirements
2.1. 系统要求
We recommend using high-quality server hardware when running Proxmox VE in
production. To further decrease the impact of a failed host, you can run Proxmox VE in
a cluster with highly available (HA) virtual machines and containers.
我们建议在生产环境中运行 Proxmox VE 时使用高质量的服务器硬件。为了进一步减少主机故障的影响,您可以将 Proxmox VE 运行在具有高可用(HA)虚拟机和容器的集群中。
Proxmox VE can use local storage (DAS), SAN, NAS, and distributed storage like Ceph
RBD. For details see chapter storage.
Proxmox VE 可以使用本地存储(DAS)、SAN、NAS 以及像 Ceph RBD 这样的分布式存储。详情请参见存储章节。
2.1.1. Minimum Requirements, for Evaluation
2.1.1. 评估的最低要求
These minimum requirements are for evaluation purposes only and should not be
used in production.
这些最低要求仅供评估用途,不应用于生产环境。
-
CPU: 64bit (Intel 64 or AMD64)
CPU:64 位(Intel 64 或 AMD64) -
Intel VT/AMD-V capable CPU/motherboard for KVM full virtualization support
支持 Intel VT/AMD-V 的 CPU/主板,以支持 KVM 全虚拟化 -
RAM: 1 GB RAM, plus additional RAM needed for guests
内存:1 GB 内存,外加虚拟机所需的额外内存 -
Hard drive 硬盘
-
One network card (NIC)
一块网卡(NIC)
2.1.2. Recommended System Requirements
2.1.2. 推荐系统要求
-
Intel 64 or AMD64 with Intel VT/AMD-V CPU flag.
带有 Intel VT/AMD-V CPU 标志的 Intel 64 或 AMD64。 -
Memory: Minimum 2 GB for the OS and Proxmox VE services, plus designated memory for guests. For Ceph and ZFS, additional memory is required; approximately 1GB of memory for every TB of used storage.
内存:操作系统和 Proxmox VE 服务至少需要 2 GB 内存,此外还需为虚拟机分配指定内存。对于 Ceph 和 ZFS,需要额外内存;大约每使用 1TB 存储需要 1GB 内存。 -
Fast and redundant storage, best results are achieved with SSDs.
快速且冗余的存储,使用 SSD 可获得最佳效果。 -
OS storage: Use a hardware RAID with battery protected write cache (“BBU”) or non-RAID with ZFS (optional SSD for ZIL).
操作系统存储:使用带有电池保护写缓存(“BBU”)的硬件 RAID,或非 RAID 配置配合 ZFS(可选用于 ZIL 的 SSD)。 -
VM storage: 虚拟机存储:
-
For local storage, use either a hardware RAID with battery backed write cache (BBU) or non-RAID for ZFS and Ceph. Neither ZFS nor Ceph are compatible with a hardware RAID controller.
对于本地存储,使用带有电池备份写缓存(BBU)的硬件 RAID,或者对于 ZFS 和 Ceph 使用非 RAID。ZFS 和 Ceph 均不兼容硬件 RAID 控制器。 -
Shared and distributed storage is possible.
共享和分布式存储是可行的。 -
SSDs with Power-Loss-Protection (PLP) are recommended for good performance. Using consumer SSDs is discouraged.
建议使用具备断电保护(PLP)的 SSD 以获得良好性能。不建议使用消费级 SSD。
-
-
Redundant (Multi-)Gbit NICs, with additional NICs depending on the preferred storage technology and cluster setup.
冗余的多千兆位网卡,根据首选存储技术和集群设置,可能需要额外的网卡。 -
For PCI(e) passthrough the CPU needs to support the VT-d/AMD-d flag.
对于 PCI(e)直通,CPU 需要支持 VT-d/AMD-d 标志。
2.1.3. Simple Performance Overview
2.1.3. 简单性能概述
To get an overview of the CPU and hard disk performance on an installed Proxmox VE
system, run the included pveperf tool.
要了解已安装 Proxmox VE 系统的 CPU 和硬盘性能,请运行随附的 pveperf 工具。
This is just a very quick and general benchmark. More detailed tests are
recommended, especially regarding the I/O performance of your system. 这只是一个非常快速且通用的基准测试。建议进行更详细的测试,特别是关于系统的 I/O 性能。
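For example, pveperf can be run without arguments to test the root file system, or with a path to test the file system mounted there (here /var/lib/vz; adjust to your setup):
# pveperf
# pveperf /var/lib/vz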
2.1.4. Supported Web Browsers for Accessing the Web Interface
2.1.4. 支持访问用户界面的网页浏览器
To access the web-based user interface, we recommend using one of the following
browsers:
要访问基于网页的用户界面,建议使用以下浏览器之一:
-
Firefox, a release from the current year, or the latest Extended Support Release
Firefox,当年发布的版本,或最新的扩展支持版本 -
Chrome, a release from the current year
Chrome,当年发布的版本 -
Microsoft’s currently supported version of Edge
Microsoft 当前支持的 Edge 版本 -
Safari, a release from the current year
Safari,当前年份发布的版本
When accessed from a mobile device, Proxmox VE will show a lightweight, touch-based
interface.
当从移动设备访问时,Proxmox VE 将显示一个轻量级的触控界面。
2.2. Prepare Installation Media
2.2. 准备安装介质
Download the installer ISO image from: https://www.proxmox.com/en/downloads/proxmox-virtual-environment/iso
从以下网址下载安装程序 ISO 镜像:https://www.proxmox.com/en/downloads/proxmox-virtual-environment/iso
The Proxmox VE installation media is a hybrid ISO image. It works in two ways:
Proxmox VE 安装介质是一个混合 ISO 镜像。它有两种使用方式:
-
An ISO image file ready to burn to a CD or DVD.
一个可以刻录到 CD 或 DVD 的 ISO 镜像文件。 -
A raw sector (IMG) image file ready to copy to a USB flash drive (USB stick).
一个原始扇区(IMG)镜像文件,可以直接复制到 USB 闪存驱动器(U 盘)。
Using a USB flash drive to install Proxmox VE is the recommended way because it is
the faster option.
使用 USB 闪存驱动器安装 Proxmox VE 是推荐的方法,因为它速度更快。
2.2.1. Prepare a USB Flash Drive as Installation Medium
2.2.1. 准备 USB 闪存驱动器作为安装介质
The flash drive needs to have at least 1 GB of storage available.
闪存驱动器需要至少有 1 GB 的可用存储空间。
Do not use UNetbootin. It does not work with the Proxmox VE installation image. 不要使用 UNetbootin。它无法与 Proxmox VE 安装镜像配合使用。
Make sure that the USB flash drive is not mounted and does not
contain any important data. 确保 USB 闪存驱动器未被挂载且不包含任何重要数据。
2.2.2. Instructions for GNU/Linux
2.2.2. GNU/Linux 使用说明
On Unix-like operating systems, use the dd command to copy the ISO image to the
USB flash drive. First find the correct device name of the USB flash drive (see
below). Then run the dd command.
在类 Unix 操作系统上,使用 dd 命令将 ISO 镜像复制到 USB 闪存驱动器。首先找到 USB 闪存驱动器的正确设备名称(见下文)。然后运行 dd 命令。
# dd bs=1M conv=fdatasync if=./proxmox-ve_*.iso of=/dev/XYZ
Be sure to replace /dev/XYZ with the correct device name and adapt the
input filename (if) path. 务必将/dev/XYZ 替换为正确的设备名称,并根据需要调整输入文件名(if)的路径。
Be very careful, and do not overwrite the wrong disk! 务必小心,切勿覆盖错误的磁盘!
Find the Correct USB Device Name
找到正确的 USB 设备名称
There are two ways to find out the name of the USB flash drive. The first one is
to compare the last lines of the dmesg command output before and after
plugging in the flash drive. The second way is to compare the output of the
lsblk command. Open a terminal and run:
有两种方法可以找出 USB 闪存驱动器的名称。第一种是比较插入闪存驱动器前后 dmesg 命令输出的最后几行。第二种方法是比较 lsblk 命令的输出。打开终端并运行:
# lsblk
Then plug in your USB flash drive and run the command again:
然后插入 USB 闪存驱动器,再次运行该命令:
# lsblk
A new device will appear. This is the one you want to use. To be on the
safe side, check whether the reported size matches your USB flash drive.
一个新设备将会出现。这就是你想要使用的设备。为了更加安全,检查报告的大小是否与你的 USB 闪存驱动器匹配。
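For example, assuming the newly listed device is /dev/sdb (a hypothetical name - always double-check on your own system), the copy command from above becomes:
# dd bs=1M conv=fdatasync if=./proxmox-ve_*.iso of=/dev/sdb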
2.2.3. Instructions for macOS
2.2.3. macOS 使用说明
Open the terminal (query Terminal in Spotlight).
打开终端(在聚焦搜索中查询“终端”)。
Convert the .iso file to .dmg format using the convert option of hdiutil,
for example:
使用 hdiutil 的 convert 选项将 .iso 文件转换为 .dmg 格式,例如:
# hdiutil convert proxmox-ve_*.iso -format UDRW -o proxmox-ve_*.dmg
macOS tends to automatically add .dmg to the output file name. macOS 通常会自动在输出文件名后添加 .dmg。
To get the current list of devices run the command:
要获取当前设备列表,请运行以下命令:
# diskutil list
Now insert the USB flash drive and run this command again to determine which
device node has been assigned to it (e.g., /dev/diskX).
现在插入 USB 闪存驱动器,再次运行此命令以确定分配给它的设备节点。(例如,/dev/diskX)。
# diskutil list
# diskutil unmountDisk /dev/diskX
Replace X with the disk number from the last command. 将 X 替换为上一个命令中显示的磁盘编号。
# sudo dd if=proxmox-ve_*.dmg bs=1M of=/dev/rdiskX
rdiskX, instead of diskX, in the last command is intended. It will
increase the write speed. 最后一条命令中使用的是 rdiskX,而不是 diskX。这将提高写入速度。
2.2.4. Instructions for Windows
2.2.4. Windows 使用说明
Using Etcher 使用 Etcher
Etcher works out of the box. Download Etcher from https://etcher.io. It will
guide you through the process of selecting the ISO and your USB flash drive.
Etcher 开箱即用。请从 https://etcher.io 下载 Etcher。它会引导你完成选择 ISO 文件和 USB 闪存驱动器的过程。
Using Rufus 使用 Rufus
Rufus is a more lightweight alternative, but you need to use the DD mode to
make it work. Download Rufus from https://rufus.ie/. Either install it or use
the portable version. Select the destination drive and the Proxmox VE ISO file.
Rufus 是一个更轻量级的替代工具,但你需要使用 DD 模式才能使其正常工作。请从 https://rufus.ie/ 下载 Rufus。你可以选择安装它或使用便携版本。选择目标驱动器和 Proxmox VE ISO 文件。
Once you press Start, you have to click No on the dialog asking to
download a different version of GRUB. In the next dialog select the DD mode. 点击开始后,弹出的对话框询问是否下载不同版本的 GRUB 时,请点击“否”。在下一个对话框中选择 DD 模式。
2.3. Using the Proxmox VE Installer
2.3. 使用 Proxmox VE 安装程序
The installer ISO image includes the following:
安装程序 ISO 镜像包括以下内容:
-
Complete operating system (Debian Linux, 64-bit)
完整的操作系统(Debian Linux,64 位) -
The Proxmox VE installer, which partitions the local disk(s) with ext4, XFS, BTRFS (technology preview), or ZFS and installs the operating system
Proxmox VE 安装程序,使用 ext4、XFS、BTRFS(技术预览)或 ZFS 对本地磁盘进行分区并安装操作系统 -
Proxmox VE Linux kernel with KVM and LXC support
支持 KVM 和 LXC 的 Proxmox VE Linux 内核 -
Complete toolset for administering virtual machines, containers, the host system, clusters and all necessary resources
完整的工具集,用于管理虚拟机、容器、主机系统、集群及所有必要资源 -
Web-based management interface
基于网页的管理界面
All existing data on the selected drives will be removed during the
installation process. The installer does not add boot menu entries for other
operating systems. 安装过程中,所选驱动器上的所有现有数据将被删除。安装程序不会为其他操作系统添加启动菜单项。
Please insert the prepared installation media
(for example, USB flash drive or CD-ROM) and boot from it.
请插入准备好的安装介质(例如,USB 闪存驱动器或光盘),并从中启动。
Make sure that booting from the installation medium (for example, USB) is
enabled in your server’s firmware settings. Secure boot needs to be disabled
when booting an installer prior to Proxmox VE version 8.1. 确保在服务器的固件设置中启用了从安装介质(例如,USB)启动。在启动安装程序时,Proxmox VE 8.1 版本之前需要禁用安全启动。
After choosing the correct entry (for example, Boot from USB) the Proxmox VE menu
will be displayed, and one of the following options can be selected:
选择正确的启动项(例如,从 USB 启动)后,将显示 Proxmox VE 菜单,可以选择以下选项之一:
-
Install Proxmox VE (Graphical)
安装 Proxmox VE(图形界面) -
Starts the normal installation.
启动正常安装。
It’s possible to use the installation wizard with a keyboard only. Buttons
can be clicked by pressing the ALT key combined with the underlined character
from the respective button. For example, ALT + N to press a Next button. 可以仅使用键盘操作安装向导。通过按住 ALT 键并结合相应按钮下划线标记的字符,可以点击按钮。例如,按 ALT + N 以点击“下一步”按钮。
-
Install Proxmox VE (Terminal UI)
安装 Proxmox VE(终端界面) -
Starts the terminal-mode installation wizard. It provides the same overall installation experience as the graphical installer, but has generally better compatibility with very old and very new hardware.
启动终端模式安装向导。它提供与图形安装程序相同的整体安装体验,但通常对非常旧和非常新的硬件具有更好的兼容性。 -
Install Proxmox VE (Terminal UI, Serial Console)
安装 Proxmox VE(终端界面,串口控制台) -
Starts the terminal-mode installation wizard, additionally setting up the Linux kernel to use the (first) serial port of the machine for in- and output. This can be used if the machine is completely headless and only has a serial console available.
启动终端模式安装向导,同时设置 Linux 内核使用机器的(第一个)串口进行输入和输出。如果机器完全无头且仅有串口控制台可用,则可以使用此方式。
Both modes use the same code base for the actual installation process to
benefit from more than a decade of bug fixes and ensure feature parity.
两种模式在实际安装过程中使用相同的代码基础,以利用十多年的错误修复成果并确保功能一致性。
The Terminal UI option can be used in case the graphical installer does
not work correctly, due to e.g. driver issues. See also
adding the nomodeset kernel parameter. 如果图形安装程序因驱动问题等原因无法正常工作,可以使用终端用户界面选项。另请参见添加 nomodeset 内核参数。
-
Advanced Options: Install Proxmox VE (Graphical, Debug Mode)
高级选项:安装 Proxmox VE(图形,调试模式) -
Starts the installation in debug mode. A console will be opened at several installation steps. This helps to debug the situation if something goes wrong. To exit a debug console, press CTRL-D. This option can be used to boot a live system with all basic tools available. You can use it, for example, to repair a degraded ZFS rpool or fix the bootloader for an existing Proxmox VE setup.
以调试模式启动安装。在多个安装步骤中会打开一个控制台。如果出现问题,这有助于调试情况。要退出调试控制台,请按 CTRL-D。此选项可用于启动带有所有基本工具的实时系统。例如,您可以使用它来修复降级的 ZFS rpool 或修复现有 Proxmox VE 设置的引导加载程序。 -
Advanced Options: Install Proxmox VE (Terminal UI, Debug Mode)
高级选项:安装 Proxmox VE(终端界面,调试模式) -
Same as the graphical debug mode, but preparing the system to run the terminal-based installer instead.
与图形调试模式相同,但准备系统以运行基于终端的安装程序。 -
Advanced Options: Install Proxmox VE (Serial Console Debug Mode)
高级选项:安装 Proxmox VE(串口控制台调试模式) -
Same as the terminal-based debug mode, but additionally sets up the Linux kernel to use the (first) serial port of the machine for in- and output.
与基于终端的调试模式相同,但额外设置 Linux 内核使用机器的(第一个)串口进行输入和输出。 -
Advanced Options: Install Proxmox VE (Automated)
高级选项:安装 Proxmox VE(自动化) -
Starts the installer in unattended mode, even if the ISO has not been appropriately prepared for an automated installation. This option can be used to gather hardware details or might be useful to debug an automated installation setup. See Unattended Installation for more information.
即使 ISO 未被适当准备为自动安装,也以无人值守模式启动安装程序。此选项可用于收集硬件详细信息,或可能有助于调试自动安装设置。更多信息请参见无人值守安装。 -
Advanced Options: Rescue Boot
高级选项:救援启动 -
With this option you can boot an existing installation. It searches all attached hard disks. If it finds an existing installation, it boots directly into that disk using the Linux kernel from the ISO. This can be useful if there are problems with the bootloader (GRUB/systemd-boot) or the BIOS/UEFI is unable to read the boot block from the disk.
使用此选项可以启动现有的安装。它会搜索所有连接的硬盘。如果找到现有的安装,它将使用 ISO 中的 Linux 内核直接从该硬盘启动。这在引导加载程序(GRUB/systemd-boot)出现问题或 BIOS/UEFI 无法读取硬盘上的引导块时非常有用。 -
Advanced Options: Test Memory (memtest86+)
高级选项:测试内存(memtest86+) -
Runs memtest86+. This is useful to check if the memory is functional and free of errors. Secure Boot must be turned off in the UEFI firmware setup utility to run this option.
运行 memtest86+。这对于检查内存是否正常且无错误非常有用。必须在 UEFI 固件设置工具中关闭安全启动才能运行此选项。
You normally select Install Proxmox VE (Graphical) to start the installation.
通常选择“安装 Proxmox VE(图形界面)”来开始安装。
The first step is to read our EULA (End User License Agreement). Following this,
you can select the target hard disk(s) for the installation.
第一步是阅读我们的最终用户许可协议(EULA)。随后,您可以选择安装的目标硬盘。
By default, the whole server is used and all existing data is removed.
Make sure there is no important data on the server before proceeding with the
installation. 默认情况下,整个服务器将被使用,且所有现有数据将被删除。请确保服务器上没有重要数据后再继续安装。
The Options button lets you select the target file system, which
defaults to ext4. The installer uses LVM if you select
ext4 or xfs as a file system, and offers additional options to
restrict LVM space (see below).
“选项”按钮允许您选择目标文件系统,默认是 ext4。安装程序在选择 ext4 或 xfs 文件系统时会使用 LVM,并提供额外选项以限制 LVM 空间(见下文)。
Proxmox VE can also be installed on ZFS. As ZFS offers several software RAID levels,
this is an option for systems that don’t have a hardware RAID controller. The
target disks must be selected in the Options dialog. More ZFS specific
settings can be changed under Advanced Options.
Proxmox VE 也可以安装在 ZFS 上。由于 ZFS 提供多种软件 RAID 级别,这对于没有硬件 RAID 控制器的系统来说是一个选项。目标磁盘必须在“选项”对话框中选择。更多 ZFS 特定设置可以在“高级选项”中更改。
ZFS on top of any hardware RAID is not supported and can result in data
loss. 不支持在任何硬件 RAID 之上使用 ZFS,这可能导致数据丢失。
The next page asks for basic configuration options like your location, time
zone, and keyboard layout. The location is used to select a nearby download
server, in order to increase the speed of updates. The installer is usually able
to auto-detect these settings, so you only need to change them in rare
situations when auto-detection fails, or when you want to use a keyboard layout
not commonly used in your country.
下一页会询问一些基本配置选项,如您的位置、时区和键盘布局。位置用于选择一个附近的下载服务器,以提高更新速度。安装程序通常能够自动检测这些设置,因此只有在自动检测失败或您想使用在您所在国家不常用的键盘布局时,才需要更改它们。
Next the password of the superuser (root) and an email address needs to be
specified. The password must consist of at least 8 characters. It’s highly
recommended to use a stronger password. Some guidelines are:
接下来需要指定超级用户(root)的密码和电子邮件地址。密码必须至少包含 8 个字符。强烈建议使用更强的密码。以下是一些指导原则:
-
Use a minimum password length of at least 12 characters.
密码长度至少应为 12 个字符。 -
Include lowercase and uppercase alphabetic characters, numbers, and symbols.
包括小写和大写字母、数字及符号。 -
Avoid character repetition, keyboard patterns, common dictionary words, letter or number sequences, usernames, relative or pet names, romantic links (current or past), and biographical information (for example ID numbers, ancestors' names or dates).
避免字符重复、键盘模式、常见字典单词、字母或数字序列、用户名、亲属或宠物名字、恋爱关系(当前或过去)以及传记信息(例如身份证号码、祖先姓名或日期)。
The email address is used to send notifications to the system administrator.
For example:
电子邮件地址用于向系统管理员发送通知。例如:
-
Information about available package updates.
有关可用包更新的信息。 -
Error messages from periodic cron jobs.
定期 cron 作业的错误消息。
The last step is the network configuration. Network interfaces that are UP
show a filled circle in front of their name in the drop down menu. Please note
that during installation you can either specify an IPv4 or IPv6 address, but not
both. To configure a dual stack node, add additional IP addresses after the
installation.
最后一步是网络配置。处于 UP 状态的网络接口在下拉菜单中其名称前会显示一个实心圆。请注意,安装过程中您只能指定 IPv4 或 IPv6 地址中的一种,不能同时指定两者。要配置双栈节点,请在安装后添加额外的 IP 地址。
The next step shows a summary of the previously selected options. Please
re-check every setting and use the Previous button if a setting needs to be
changed.
下一步显示之前选择选项的摘要。请重新检查每个设置,如果需要更改设置,请使用“上一步”按钮。
After clicking Install, the installer will begin to format the disks and copy
packages to the target disk(s). Please wait until this step has finished; then
remove the installation medium and restart your system.
点击安装后,安装程序将开始格式化磁盘并将软件包复制到目标磁盘。请等待此步骤完成;然后移除安装介质并重启系统。
Copying the packages usually takes several minutes, mostly depending on the
speed of the installation medium and the target disk performance.
复制软件包通常需要几分钟时间,主要取决于安装介质的速度和目标磁盘的性能。
When copying and setting up the packages has finished, you can reboot the
server. This will be done automatically after a few seconds by default.
当复制和设置软件包完成后,您可以重启服务器。默认情况下,几秒钟后系统会自动重启。
If the installation failed, check out specific errors on the second TTY
(CTRL + ALT + F2) and ensure that the system meets the
minimum requirements.
如果安装失败,请在第二个 TTY(CTRL + ALT + F2)查看具体错误,并确保系统满足最低要求。
If the installation is still not working, look at the
how to get help chapter.
如果安装仍然无法进行,请查看“如何获取帮助”章节。
2.3.1. Accessing the Management Interface Post-Installation
2.3.1. 安装后访问管理界面
After a successful installation and reboot of the system you can use the Proxmox VE
web interface for further configuration.
系统成功安装并重启后,您可以使用 Proxmox VE 网页界面进行进一步配置。
-
Point your browser to the IP address given during the installation and port 8006, for example: https://youripaddress:8006
在浏览器中输入安装时提供的 IP 地址和端口 8006,例如:https://youripaddress:8006 -
Log in using the root (realm PAM) username and the password chosen during installation.
使用 root(PAM 领域)用户名和安装时选择的密码登录。 -
Upload your subscription key to gain access to the Enterprise repository. Otherwise, you will need to set up one of the public, less tested package repositories to get updates for security fixes, bug fixes, and new features.
上传您的订阅密钥以访问企业代码仓库。否则,您需要设置其中一个公共的、测试较少的包代码仓库,以获取安全修复、错误修复和新功能的更新。 -
Check the IP configuration and hostname.
检查 IP 配置和主机名。 -
Check the timezone. 检查时区。
-
Check your Firewall settings.
检查您的防火墙设置。
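The hostname, IP configuration and timezone from the checklist above can also be verified from a shell on the node, for example with the standard commands:
# hostname --fqdn
# ip address show
# timedatectl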
2.3.2. Advanced LVM Configuration Options
2.3.2. 高级 LVM 配置选项
The installer creates a Volume Group (VG) called pve, and additional Logical
Volumes (LVs) called root, data, and swap, if ext4 or xfs is used. To
control the size of these volumes use:
安装程序会创建一个名为 pve 的卷组(VG),如果使用 ext4 或 xfs,还会创建名为 root、data 和 swap 的额外逻辑卷(LV)。要控制这些卷的大小,请使用:
- hdsize 硬盘大小
-
Defines the total hard disk size to be used. This way you can reserve free space on the hard disk for further partitioning (for example for an additional PV and VG on the same hard disk that can be used for LVM storage).
定义要使用的硬盘总大小。通过这种方式,您可以在硬盘上保留空闲空间以便进一步分区(例如,在同一硬盘上为额外的 PV 和 VG 预留空间,这些可以用于 LVM 存储)。 - swapsize 交换分区大小
-
Defines the size of the swap volume. The default is the size of the installed memory, minimum 4 GB and maximum 8 GB. The resulting value cannot be greater than hdsize/8.
定义交换分区的大小。默认值为已安装内存的大小,最小 4 GB,最大 8 GB。最终值不能大于硬盘大小的 1/8。If set to 0, no swap volume will be created.
如果设置为 0,则不会创建交换分区。 - maxroot
-
Defines the maximum size of the root volume, which stores the operating system. The maximum limit of the root volume size is hdsize/4.
定义根卷的最大大小,根卷用于存储操作系统。根卷大小的最大限制为 hdsize 的四分之一。 - maxvz
-
Defines the maximum size of the data volume. The actual size of the data volume is:
定义数据卷的最大大小。数据卷的实际大小为:datasize = hdsize - rootsize - swapsize - minfree
Where datasize cannot be bigger than maxvz.
其中 datasize 不能大于 maxvz。In case of LVM thin, the data pool will only be created if datasize is bigger than 4GB.
对于 LVM thin,只有当 datasize 大于 4GB 时,才会创建数据池。If set to 0, no data volume will be created and the storage configuration will be adapted accordingly.
如果设置为 0,则不会创建数据卷,并且存储配置将相应调整。 - minfree
-
Defines the amount of free space that should be left in the LVM volume group pve. With more than 128GB storage available, the default is 16GB, otherwise hdsize/8 will be used.
定义应在 LVM 卷组 pve 中保留的空闲空间量。当可用存储超过 128GB 时,默认值为 16GB,否则将使用 hdsize/8。LVM requires free space in the VG for snapshot creation (not required for lvmthin snapshots).
LVM 在卷组中需要空闲空间以创建快照(lvmthin 快照不需要)。
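As a worked example of how these options interact, consider a hypothetical 1000 GB disk installed with ext4 on a host with 8 GB of RAM, leaving all defaults in place: swapsize defaults to the memory size (8 GB), minfree defaults to 16 GB because more than 128 GB of storage is available, and the root volume is capped at hdsize/4. If the root volume ends up at, say, 96 GB, the data volume receives roughly datasize = 1000 - 96 - 8 - 16 = 880 GB.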
2.3.3. Advanced ZFS Configuration Options
2.3.3. 高级 ZFS 配置选项
The installer creates the ZFS pool rpool, if ZFS is used. No swap space is
created but you can reserve some unpartitioned space on the install disks for
swap. You can also create a swap zvol after the installation, although this can
lead to problems (see ZFS swap notes).
如果使用 ZFS,安装程序会创建 ZFS 池 rpool。不会创建交换空间,但您可以在安装磁盘上保留一些未分区的空间作为交换空间。您也可以在安装后创建交换 zvol,尽管这可能会导致问题(参见 ZFS 交换注意事项)。
- ashift
-
Defines the ashift value for the created pool. The ashift needs to be set at least to the sector-size of the underlying disks (2 to the power of ashift is the sector-size), or any disk which might be put in the pool (for example the replacement of a defective disk).
定义创建的池的 ashift 值。ashift 需要至少设置为底层磁盘的扇区大小(2 的 ashift 次方即为扇区大小),或者任何可能放入池中的磁盘(例如更换故障磁盘)。 - compress 压缩
-
Defines whether compression is enabled for rpool.
定义是否为 rpool 启用压缩。 - checksum 校验和
-
Defines which checksumming algorithm should be used for rpool.
定义应为 rpool 使用哪种校验和算法。 - copies 副本数
-
Defines the copies parameter for rpool. Check the zfs(8) manpage for the semantics, and why this does not replace redundancy on disk-level.
为 rpool 定义副本数参数。请查阅 zfs(8) 手册页了解其语义,以及为什么这不能替代磁盘级别的冗余。 - ARC max size ARC 最大大小
-
Defines the maximum size the ARC can grow to and thus limits the amount of memory ZFS will use. See also the section on how to limit ZFS memory usage for more details.
定义 ARC 可以增长到的最大大小,从而限制 ZFS 使用的内存量。更多细节请参见关于如何限制 ZFS 内存使用的章节。 - hdsize
-
Defines the total hard disk size to be used. This is useful to save free space on the hard disk(s) for further partitioning (for example to create a swap-partition). hdsize is only honored for bootable disks, that is only the first disk or mirror for RAID0, RAID1 or RAID10, and all disks in RAID-Z[123].
定义要使用的硬盘总大小。这对于在硬盘上保留空闲空间以便进一步分区(例如创建交换分区)非常有用。hdsize 仅对可启动磁盘生效,即仅对第一个磁盘或 RAID0、RAID1 或 RAID10 的镜像,以及 RAID-Z[123] 中的所有磁盘生效。
2.3.4. Advanced BTRFS Configuration Options
2.3.4. 高级 BTRFS 配置选项
No swap space is created when BTRFS is used but you can reserve some
unpartitioned space on the install disks for swap. You can either create a
separate partition, BTRFS subvolume or a swapfile using the btrfs filesystem
mkswapfile command.
使用 BTRFS 时不会创建交换空间,但您可以在安装磁盘上保留一些未分区的空间用于交换。您可以创建单独的分区、BTRFS 子卷,或使用 btrfs 文件系统的 mkswapfile 命令创建交换文件。
- compress 压缩
-
Defines whether compression is enabled for the BTRFS subvolume. Different compression algorithms are supported: on (equivalent to zlib), zlib, lzo and zstd. Defaults to off.
定义是否启用 BTRFS 子卷的压缩。支持不同的压缩算法:on(等同于 zlib)、zlib、lzo 和 zstd。默认关闭。 - hdsize 硬盘大小
-
Defines the total hard disk size to be used. This is useful to save free space on the hard disk(s) for further partitioning (for example, to create a swap partition).
定义要使用的硬盘总大小。这对于在硬盘上保留空闲空间以便进一步分区(例如,创建交换分区)非常有用。
2.3.5. ZFS Performance Tips
2.3.5. ZFS 性能提示
ZFS works best with a lot of memory. If you intend to use ZFS make sure to have
enough RAM available for it. A good calculation is 4GB plus 1GB RAM for each TB
RAW disk space.
ZFS 在大量内存的情况下表现最佳。如果您打算使用 ZFS,请确保有足够的 RAM 可用。一个好的计算方法是 4GB 加上每 TB 原始磁盘空间 1GB RAM。
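For example, by this rule of thumb a node with 16 TB of raw disk space should have roughly 4 GB + 16 x 1 GB = 20 GB of RAM set aside for ZFS.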
ZFS can use a dedicated drive as write cache, called the ZFS Intent Log (ZIL).
Use a fast drive (SSD) for it. It can be added after installation with the
following command:
ZFS 可以使用专用驱动器作为写缓存,称为 ZFS 意图日志(ZIL)。请使用快速驱动器(SSD)作为写缓存。安装后可以通过以下命令添加:
# zpool add <pool-name> log </dev/path_to_fast_ssd>
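For example, assuming the default pool name rpool and a hypothetical fast SSD available under /dev/disk/by-id, the command could look like:
# zpool add rpool log /dev/disk/by-id/nvme-FAST_SSD_EXAMPLE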
2.3.6. Adding the nomodeset Kernel Parameter
2.3.6. 添加 nomodeset 内核参数
Problems may arise on very old or very new hardware due to graphics drivers. If
the installation hangs during boot, you can try adding the nomodeset
parameter. This prevents the Linux kernel from loading any graphics drivers and
forces it to continue using the BIOS/UEFI-provided framebuffer.
由于图形驱动程序,极旧或极新的硬件可能会出现问题。如果安装在启动时挂起,可以尝试添加 nomodeset 参数。此参数阻止 Linux 内核加载任何图形驱动程序,并强制其继续使用 BIOS/UEFI 提供的帧缓冲区。
On the Proxmox VE bootloader menu, navigate to Install Proxmox VE (Terminal UI) and
press e to edit the entry. Using the arrow keys, navigate to the line starting
with linux, move the cursor to the end of that line and add the
parameter nomodeset, separated by a space from the pre-existing last
parameter.
在 Proxmox VE 启动加载菜单中,导航到 Install Proxmox VE (终端 UI) 并按 e 编辑该条目。使用方向键,导航到以 linux 开头的行,将光标移动到该行末尾,并添加 nomodeset 参数,参数之间用空格与之前的最后一个参数分开。
Then press Ctrl-X or F10 to boot the configuration.
然后按 Ctrl-X 或 F10 启动该配置。
2.4. Unattended Installation
2.4. 无人值守安装
The automated installation method allows installing Proxmox VE
in an unattended manner. This enables you to fully automate the setup
process on bare-metal. Once the installation is complete and the host
has booted up, automation tools like Ansible can be used to further
configure the installation.
自动安装方法允许以无人值守的方式安装 Proxmox VE。这使您能够在裸机上完全自动化设置过程。安装完成并且主机启动后,可以使用 Ansible 等自动化工具进一步配置安装。
The necessary options for the installer must be provided in an answer
file. This file allows using filter rules to determine which disks and
network cards should be used.
安装程序所需的选项必须在答复文件中提供。该文件允许使用过滤规则来确定应使用哪些磁盘和网卡。
To use the automated installation, it is first necessary to choose a
source from which the answer file is fetched from and then prepare an
installation ISO with that choice.
要使用自动安装,首先需要选择一个来源以获取答复文件,然后使用该选择准备一个安装 ISO。
Once the ISO is prepared, its initial boot menu will show a new boot
entry named Automated Installation which gets automatically selected
after a 10-second timeout.
ISO 准备好后,其初始启动菜单将显示一个名为“自动安装”的新启动项,该项将在 10 秒超时后自动选择。
Visit our wiki for more
details and information on the unattended installation.
访问我们的维基,获取有关无人值守安装的更多详细信息和资料。
2.5. Install Proxmox VE on Debian
2.5. 在 Debian 上安装 Proxmox VE
Proxmox VE ships as a set of Debian packages and can be installed on top of a standard
Debian installation.
After configuring the repositories you need
to run the following commands:
Proxmox VE 以一组 Debian 软件包的形式发布,可以安装在标准的 Debian 系统之上。配置好软件源后,您需要运行以下命令:
# apt-get update
# apt-get install proxmox-ve
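The repository configuration mentioned above has to be done before running these commands. As a minimal sketch, assuming Debian Bookworm and the pve-no-subscription repository (see the wiki how-to for the authoritative, complete steps, including verifying the repository key), it could look like this:
# echo "deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription" > /etc/apt/sources.list.d/pve-install-repo.list
# wget https://enterprise.proxmox.com/debian/proxmox-release-bookworm.gpg -O /etc/apt/trusted.gpg.d/proxmox-release-bookworm.gpg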
Installing on top of an existing Debian installation looks easy, but it presumes
that the base system has been installed correctly and that you know how you want
to configure and use the local storage. You also need to configure the network
manually.
在现有的 Debian 系统上安装看起来很简单,但前提是基础系统已正确安装,并且您知道如何配置和使用本地存储。您还需要手动配置网络。
In general, this is not trivial, especially when LVM or ZFS is used.
一般来说,这并不简单,尤其是在使用 LVM 或 ZFS 时。
A detailed step by step how-to can be found on the
wiki.
详细的分步操作指南可以在维基上找到。
3. Host System Administration
3. 主机系统管理
The following sections will focus on common virtualization tasks and explain the
Proxmox VE specifics regarding the administration and management of the host machine.
以下章节将重点介绍常见的虚拟化任务,并解释 Proxmox VE 在主机机器管理和维护方面的具体内容。
Proxmox VE is based on Debian GNU/Linux with additional
repositories to provide the Proxmox VE related packages. This means that the full
range of Debian packages is available including security updates and bug fixes.
Proxmox VE provides its own Linux kernel based on the Ubuntu kernel. It has all the
necessary virtualization and container features enabled and includes
ZFS and several extra hardware drivers.
Proxmox VE 基于 Debian GNU/Linux,并附加了提供 Proxmox VE 相关包的仓库。这意味着可以使用完整的 Debian 包,包括安全更新和错误修复。Proxmox VE 提供了基于 Ubuntu 内核的自有 Linux 内核。该内核启用了所有必要的虚拟化和容器功能,并包含 ZFS 及若干额外的硬件驱动。
For other topics not included in the following sections, please refer to the
Debian documentation. The
Debian
Administrator's Handbook is available online, and provides a comprehensive
introduction to the Debian operating system (see [Hertzog13]).
对于以下章节未涵盖的其他主题,请参考 Debian 文档。Debian 管理员手册可在线获取,提供了对 Debian 操作系统的全面介绍(参见 [Hertzog13])。
3.1. Package Repositories
3.1. 包仓库
Proxmox VE uses APT as its
package management tool like any other Debian-based system.
Proxmox VE 使用 APT 作为其包管理工具,和其他基于 Debian 的系统一样。
Proxmox VE automatically checks for package updates on a daily basis. The root@pam
user is notified via email about available updates. From the GUI, the
Changelog button can be used to see more details about a selected update.
Proxmox VE 会自动每天检查包更新。root@pam 用户会通过电子邮件收到可用更新的通知。在图形界面中,可以使用“更新日志”按钮查看所选更新的详细信息。
3.1.1. Repositories in Proxmox VE
3.1.1. Proxmox VE 中的软件仓库
Repositories are collections of software packages; they can be used to install
new software, but they are also important for getting new updates.
软件仓库是一组软件包,它们可以用来安装新软件,同时也对获取新更新非常重要。
You need valid Debian and Proxmox repositories to get the latest
security updates, bug fixes and new features. 您需要有效的 Debian 和 Proxmox 软件仓库,以获取最新的安全更新、错误修复和新功能。
APT Repositories are defined in the file /etc/apt/sources.list and in .list
files placed in /etc/apt/sources.list.d/.
APT 代码仓库定义在文件 /etc/apt/sources.list 中,以及放置在 /etc/apt/sources.list.d/ 目录下的 .list 文件中。
Repository Management 代码仓库管理
Since Proxmox VE 7, you can check the repository state in the web interface.
The node summary panel shows a high level status overview, while the separate
Repository panel shows in-depth status and list of all configured
repositories.
自 Proxmox VE 7 起,您可以在网页界面中检查代码仓库状态。节点摘要面板显示高级状态概览,而单独的代码仓库面板则显示详细状态和所有已配置代码仓库的列表。
Basic repository management, for example, activating or deactivating a
repository, is also supported.
基本的代码仓库管理,例如激活或停用代码仓库,也受到支持。
Sources.list
In a sources.list file, each line defines a package repository. The preferred
source must come first. Empty lines are ignored. A # character anywhere on a
line marks the remainder of that line as a comment. The available packages from
a repository are acquired by running apt-get update. Updates can be installed
directly using apt-get, or via the GUI (Node → Updates).
在 sources.list 文件中,每一行定义一个包代码仓库。首选的源必须放在第一位。空行会被忽略。行中任何位置的 # 字符都会将该行剩余部分标记为注释。通过运行 apt-get update 可以获取代码仓库中可用的包。更新可以直接使用 apt-get 安装,或者通过图形界面(节点 → 更新)进行安装。
File /etc/apt/sources.list
deb http://deb.debian.org/debian bookworm main contrib
deb http://deb.debian.org/debian bookworm-updates main contrib
# security updates
deb http://security.debian.org/debian-security bookworm-security main contrib
Proxmox VE provides three different package repositories.
Proxmox VE 提供了三种不同的包代码仓库。
3.1.2. Proxmox VE Enterprise Repository
3.1.2. Proxmox VE 企业代码仓库
This is the recommended repository and available for all Proxmox VE subscription
users. It contains the most stable packages and is suitable for production use.
The pve-enterprise repository is enabled by default:
这是推荐使用的代码仓库,适用于所有 Proxmox VE 订阅用户。它包含最稳定的软件包,适合生产环境使用。pve-enterprise 代码仓库默认启用:
文件 /etc/apt/sources.list.d/pve-enterprise.list
deb https://enterprise.proxmox.com/debian/pve bookworm pve-enterprise
Please note that you need a valid subscription key to access the
pve-enterprise repository. We offer different support levels, which you can
find further details about at https://proxmox.com/en/proxmox-virtual-environment/pricing.
请注意,访问 pve-enterprise 代码仓库需要有效的订阅密钥。我们提供不同的支持级别,详细信息请访问 https://proxmox.com/en/proxmox-virtual-environment/pricing。
|
|
You can disable this repository by commenting out the above line using a
# (at the start of the line). This prevents error messages if your host does
not have a subscription key. Please configure the pve-no-subscription
repository in that case. 您可以通过在上述行前加上 #(行首)来注释该行,从而禁用此代码仓库。如果您的主机没有订阅密钥,这样可以防止出现错误信息。在这种情况下,请配置 pve-no-subscription 代码仓库。 |
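For example, the deactivated repository entry in /etc/apt/sources.list.d/pve-enterprise.list would then look like this:
# deb https://enterprise.proxmox.com/debian/pve bookworm pve-enterprise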
3.1.3. Proxmox VE No-Subscription Repository
3.1.3. Proxmox VE 无订阅代码仓库
As the name suggests, you do not need a subscription key to access
this repository. It can be used for testing and non-production
use. It’s not recommended to use this on production servers, as these
packages are not always as heavily tested and validated.
顾名思义,访问此代码仓库无需订阅密钥。它可用于测试和非生产环境。不建议在生产服务器上使用此代码仓库,因为这些软件包并非总是经过严格测试和验证。
We recommend to configure this repository in /etc/apt/sources.list.
我们建议在 /etc/apt/sources.list 中配置此代码仓库。
文件 /etc/apt/sources.list
deb http://ftp.debian.org/debian bookworm main contrib
deb http://ftp.debian.org/debian bookworm-updates main contrib

# Proxmox VE pve-no-subscription repository provided by proxmox.com,
# NOT recommended for production use
deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription

# security updates
deb http://security.debian.org/debian-security bookworm-security main contrib
3.1.4. Proxmox VE Test Repository
3.1.4. Proxmox VE 测试代码仓库
This repository contains the latest packages and is primarily used by developers
to test new features. To configure it, add the following line to
/etc/apt/sources.list:
该代码仓库包含最新的软件包,主要供开发人员测试新功能使用。要配置它,请将以下行添加到 /etc/apt/sources.list:
sources.list entry for pvetest  pvetest 的 sources.list 条目
deb http://download.proxmox.com/debian/pve bookworm pvetest
|
|
The pvetest repository should (as the name implies) only be used for
testing new features or bug fixes. pvetest 代码仓库应(顾名思义)仅用于测试新功能或错误修复。 |
3.1.5. Ceph Squid Enterprise Repository
3.1.5. Ceph Squid 企业代码仓库
This repository holds the enterprise Proxmox VE Ceph 19.2 Squid packages. They are
suitable for production. Use this repository if you run the Ceph client or a
full Ceph cluster on Proxmox VE.
该代码仓库包含企业版 Proxmox VE Ceph 19.2 Squid 软件包。它们适合用于生产环境。如果您在 Proxmox VE 上运行 Ceph 客户端或完整的 Ceph 集群,请使用此代码仓库。
文件 /etc/apt/sources.list.d/ceph.list
deb https://enterprise.proxmox.com/debian/ceph-squid bookworm enterprise
3.1.6. Ceph Squid No-Subscription Repository
3.1.6. Ceph Squid 无订阅代码仓库
This Ceph repository contains the Ceph 19.2 Squid packages before they are moved
to the enterprise repository and after they were on the test repository.
该 Ceph 代码仓库包含 Ceph 19.2 Squid 软件包,这些软件包在被移入企业代码仓库之前,以及在测试代码仓库中之后。
|
|
It’s recommended to use the enterprise repository for production
machines. 建议在生产机器上使用企业代码仓库。 |
文件 /etc/apt/sources.list.d/ceph.list
deb http://download.proxmox.com/debian/ceph-squid bookworm no-subscription
3.1.7. Ceph Squid Test Repository
3.1.7. Ceph Squid 测试代码仓库
This Ceph repository contains the Ceph 19.2 Squid packages before they are moved
to the main repository. It is used to test new Ceph releases on Proxmox VE.
该 Ceph 代码仓库包含 Ceph 19.2 Squid 软件包,位于它们被移入主代码仓库之前。它用于在 Proxmox VE 上测试新的 Ceph 版本。
文件 /etc/apt/sources.list.d/ceph.list
deb http://download.proxmox.com/debian/ceph-squid bookworm test
3.1.8. Ceph Reef Enterprise Repository
3.1.8. Ceph Reef 企业代码仓库
This repository holds the enterprise Proxmox VE Ceph 18.2 Reef packages. They are
suitable for production. Use this repository if you run the Ceph client or a
full Ceph cluster on Proxmox VE.
此代码仓库包含企业版 Proxmox VE Ceph 18.2 Reef 软件包。它们适用于生产环境。如果您在 Proxmox VE 上运行 Ceph 客户端或完整的 Ceph 集群,请使用此代码仓库。
文件 /etc/apt/sources.list.d/ceph.list
deb https://enterprise.proxmox.com/debian/ceph-reef bookworm enterprise
3.1.9. Ceph Reef No-Subscription Repository
3.1.9. Ceph Reef 无订阅代码仓库
This Ceph repository contains the Ceph 18.2 Reef packages before they are moved
to the enterprise repository and after they were on the test repository.
此 Ceph 代码仓库包含 Ceph 18.2 Reef 软件包,这些软件包在被移至企业代码仓库之前,以及在测试代码仓库中之后。
|
|
It’s recommended to use the enterprise repository for production
machines. 建议在生产机器上使用企业代码仓库。 |
文件 /etc/apt/sources.list.d/ceph.list
deb http://download.proxmox.com/debian/ceph-reef bookworm no-subscription
3.1.10. Ceph Reef Test Repository
3.1.10. Ceph Reef 测试代码仓库
This Ceph repository contains the Ceph 18.2 Reef packages before they are moved
to the main repository. It is used to test new Ceph releases on Proxmox VE.
该 Ceph 代码仓库包含 Ceph 18.2 Reef 软件包,位于它们被移入主代码仓库之前。它用于在 Proxmox VE 上测试新的 Ceph 版本。
文件 /etc/apt/sources.list.d/ceph.list
deb http://download.proxmox.com/debian/ceph-reef bookworm test
3.1.11. Ceph Quincy Enterprise Repository
3.1.11. Ceph Quincy 企业代码仓库
This repository holds the enterprise Proxmox VE Ceph Quincy packages. They are
suitable for production. Use this repository if you run the Ceph client or a
full Ceph cluster on Proxmox VE.
该代码仓库包含企业版 Proxmox VE Ceph Quincy 软件包。它们适用于生产环境。如果您在 Proxmox VE 上运行 Ceph 客户端或完整的 Ceph 集群,请使用此代码仓库。
文件 /etc/apt/sources.list.d/ceph.list
deb https://enterprise.proxmox.com/debian/ceph-quincy bookworm enterprise
3.1.12. Ceph Quincy No-Subscription Repository
3.1.12. Ceph Quincy 无订阅代码仓库
This Ceph repository contains the Ceph Quincy packages before they are moved
to the enterprise repository and after they were on the test repository.
该 Ceph 代码仓库包含 Ceph Quincy 软件包,这些软件包在被移入企业代码仓库之前,以及在测试代码仓库中发布之后。
|
|
It’s recommended to use the enterprise repository for production
machines. 建议在生产机器上使用企业代码仓库。 |
文件 /etc/apt/sources.list.d/ceph.list
deb http://download.proxmox.com/debian/ceph-quincy bookworm no-subscription
3.1.13. Ceph Quincy Test Repository
3.1.13. Ceph Quincy 测试代码仓库
This Ceph repository contains the Ceph Quincy packages before they are moved
to the main repository. It is used to test new Ceph releases on Proxmox VE.
该 Ceph 代码仓库包含 Ceph Quincy 软件包,位于它们被移入主代码仓库之前。它用于在 Proxmox VE 上测试新的 Ceph 版本。
文件 /etc/apt/sources.list.d/ceph.list
deb http://download.proxmox.com/debian/ceph-quincy bookworm test
3.1.14. Older Ceph Repositories
3.1.14. 旧版 Ceph 代码仓库
Proxmox VE 8 doesn’t support Ceph Pacific, Ceph Octopus, or even older releases for
hyper-converged setups. For those releases, you need to first upgrade Ceph to a
newer release before upgrading to Proxmox VE 8.
Proxmox VE 8 不支持 Ceph Pacific、Ceph Octopus,甚至更早版本的超融合设置。对于这些版本,您需要先将 Ceph 升级到较新的版本,然后再升级到 Proxmox VE 8。
See the respective
upgrade guide for details.
有关详细信息,请参阅相应的升级指南。
3.1.15. Debian Firmware Repository
3.1.15. Debian 固件代码仓库
Starting with Debian Bookworm (Proxmox VE 8) non-free firmware (as defined by
DFSG) has been moved to the
newly created Debian repository component non-free-firmware.
从 Debian Bookworm(Proxmox VE 8)开始,非自由固件(由 DFSG 定义)已被移至新创建的 Debian 代码仓库组件 non-free-firmware。
Enable this repository if you want to set up
Early OS Microcode Updates or need additional
Runtime Firmware Files not already
included in the pre-installed package pve-firmware.
如果您想设置早期操作系统微代码更新或需要预装包 pve-firmware 中未包含的额外运行时固件文件,请启用此代码仓库。
To be able to install packages from this component, run
editor /etc/apt/sources.list, append non-free-firmware to the end of each
.debian.org repository line and run apt update.
要能够从此组件安装包,请运行编辑器打开 /etc/apt/sources.list,在每个 .debian.org 代码仓库行末尾添加 non-free-firmware,然后运行 apt update。
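Using the sources.list example above, a Debian repository line with the non-free-firmware component appended would look like this:
deb http://deb.debian.org/debian bookworm main contrib non-free-firmware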
3.1.16. SecureApt
The Release files in the repositories are signed with GnuPG. APT is using
these signatures to verify that all packages are from a trusted source.
代码仓库中的发布文件使用 GnuPG 签名。APT 使用这些签名来验证所有包是否来自可信来源。
If you install Proxmox VE from an official ISO image, the key for verification is
already installed.
如果您从官方 ISO 镜像安装 Proxmox VE,验证密钥已经预装。
If you install Proxmox VE on top of Debian, download and install
the key with the following commands:
如果您在 Debian 上安装 Proxmox VE,请使用以下命令下载并安装密钥:
# wget https://enterprise.proxmox.com/debian/proxmox-release-bookworm.gpg -O /etc/apt/trusted.gpg.d/proxmox-release-bookworm.gpg
Verify the checksum afterwards with the sha512sum CLI tool:
随后使用 sha512sum 命令行工具验证校验和:
# sha512sum /etc/apt/trusted.gpg.d/proxmox-release-bookworm.gpg
7da6fe34168adc6e479327ba517796d4702fa2f8b4f0a9833f5ea6e6b48f6507a6da403a274fe201595edc86a84463d50383d07f64bdde2e3658108db7d6dc87  /etc/apt/trusted.gpg.d/proxmox-release-bookworm.gpg
or the md5sum CLI tool:
或者使用 md5sum 命令行工具:
# md5sum /etc/apt/trusted.gpg.d/proxmox-release-bookworm.gpg
41558dc019ef90bd0f6067644a51cf5b  /etc/apt/trusted.gpg.d/proxmox-release-bookworm.gpg
3.2. System Software Updates
3.2. 系统软件更新
Proxmox provides updates on a regular basis for all repositories. To install
updates use the web-based GUI or the following CLI commands:
Proxmox 定期为所有软件仓库提供更新。要安装更新,可以使用基于网页的图形界面或以下命令行工具命令:
# apt-get update
# apt-get dist-upgrade
|
|
The APT package management system is very flexible and provides many
features, see man apt-get, or [Hertzog13] for additional information. APT 包管理系统非常灵活,提供了许多功能,更多信息请参见 man apt-get 或 [Hertzog13]。 |
|
|
Regular updates are essential to get the latest patches and security
related fixes. Major system upgrades are announced in the Proxmox VE Community Forum. 定期更新对于获取最新的补丁和安全修复至关重要。重大系统升级会在 Proxmox VE 社区论坛中发布公告。 |
3.3. Firmware Updates 3.3. 固件更新
Firmware updates from this chapter should be applied when running Proxmox VE on a
bare-metal server. Whether configuring firmware updates is appropriate within
guests, e.g. when using device pass-through, depends strongly on your setup and
is therefore out of scope.
本章介绍的固件更新应在裸机服务器上运行 Proxmox VE 时应用。是否在虚拟机内配置固件更新,例如使用设备直通时,取决于您的具体设置,因此不在本章讨论范围内。
In addition to regular software updates, firmware updates are also important
for reliable and secure operation.
除了常规的软件更新外,固件更新对于系统的可靠性和安全性也非常重要。
When obtaining and applying firmware updates, a combination of available options
is recommended to get them as early as possible or at all.
在获取和应用固件更新时,建议结合多种可用选项,以尽早或确保能够获得更新。
The term firmware is usually divided linguistically into microcode (for CPUs)
and firmware (for other devices).
术语固件通常在语言上分为微代码(用于 CPU)和固件(用于其他设备)。
3.3.1. Persistent Firmware
3.3.1. 持久固件
This section is suitable for all devices. Updated microcode, which is usually
included in a BIOS/UEFI update, is stored on the motherboard, whereas other
firmware is stored on the respective device. This persistent method is
especially important for the CPU, as it enables the earliest possible regular
loading of the updated microcode at boot time.
本节适用于所有设备。更新的微代码通常包含在 BIOS/UEFI 更新中,存储在主板上,而其他固件则存储在各自的设备上。这种持久方法对于 CPU 尤为重要,因为它使得在启动时能够尽早且定期加载更新的微代码成为可能。
|
|
With some updates, such as for BIOS/UEFI or storage controller, the
device configuration could be reset. Please follow the vendor’s instructions
carefully and back up the current configuration. 对于某些更新,如 BIOS/UEFI 或存储控制器,设备配置可能会被重置。请仔细遵循厂商的说明,并备份当前配置。 |
Please check with your vendor which update methods are available.
请向您的供应商确认可用的更新方法。
-
Convenient update methods for servers can include Dell’s Lifecycle Manager or Service Packs from HPE.
服务器的便捷更新方法可以包括戴尔的生命周期管理器或惠普企业的服务包。 -
Sometimes there are Linux utilities available as well. Examples are mlxup for NVIDIA ConnectX or bnxtnvm/niccli for Broadcom network cards.
有时也会有可用的 Linux 工具。例如,NVIDIA ConnectX 的 mlxup 或博通网卡的 bnxtnvm/niccli。 -
LVFS is also an option if there is a cooperation with the hardware vendor and supported hardware in use. The technical requirement for this is that the system was manufactured after 2014 and is booted via UEFI.
如果与硬件供应商有合作且使用支持的硬件,LVFS 也是一个选项。技术要求是系统制造于 2014 年以后,并通过 UEFI 启动。
Proxmox VE ships its own version of the fwupd package to enable Secure Boot
Support with the Proxmox signing key. This package consciously dropped the
dependency recommendation for the udisks2 package, due to observed issues with
its use on hypervisors. That means you must explicitly configure the correct
mount point of the EFI partition in /etc/fwupd/daemon.conf, for example:
Proxmox VE 自带了自己的 fwupd 包版本,以启用使用 Proxmox 签名密钥的安全启动支持。该包有意取消了对 udisks2 包的依赖推荐,因为在使用该包于虚拟机监控器时观察到了问题。这意味着你必须在 /etc/fwupd/daemon.conf 中显式配置 EFI 分区的正确挂载点,例如:
文件 /etc/fwupd/daemon.conf
# Override the location used for the EFI system partition (ESP) path.
EspLocation=/boot/efi
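With the ESP location configured and supported hardware in use, a typical LVFS update check with the standard fwupdmgr tool might look like the following sketch; whether any updates are actually offered depends on your vendor and hardware:
# fwupdmgr refresh
# fwupdmgr get-updates
# fwupdmgr update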
|
|
If the update instructions require a host reboot, make sure that it can be
done safely. See also Node Maintenance. 如果更新说明要求主机重启,请确保可以安全地进行重启。另请参见节点维护。 |
3.3.2. Runtime Firmware Files
3.3.2. 运行时固件文件
This method stores firmware on the Proxmox VE operating system and will pass it to a
device if its persisted firmware is less
recent. It is supported by devices such as network and graphics cards, but not
by those that rely on persisted firmware such as the motherboard and hard disks.
这种方法将固件存储在 Proxmox VE 操作系统中,并在设备的持久固件版本较旧时传递给设备。它被网络和显卡等设备支持,但不支持依赖持久固件的设备,如主板和硬盘。
In Proxmox VE the package pve-firmware is already installed by default. Therefore,
with the normal system updates (APT), included
firmware of common hardware is automatically kept up to date.
在 Proxmox VE 中,pve-firmware 包默认已安装。因此,通过正常的系统更新(APT),常见硬件的内置固件会自动保持最新。
An additional Debian Firmware Repository
exists, but is not configured by default.
存在一个额外的 Debian 固件代码仓库,但默认未配置。
If you try to install an additional firmware package but it conflicts, APT will
abort the installation. Perhaps the particular firmware can be obtained in
another way.
如果尝试安装额外的固件包但发生冲突,APT 将中止安装。也许可以通过其他方式获取该特定固件。
3.3.3. CPU Microcode Updates
3.3.3. CPU 微代码更新
Microcode updates are intended to fix found security vulnerabilities and other
serious CPU bugs. While the CPU performance can be affected, a patched microcode
is usually still more performant than an unpatched microcode where the kernel
itself has to do mitigations. Depending on the CPU type, it is possible that
performance results of the flawed factory state can no longer be achieved
without knowingly running the CPU in an unsafe state.
微代码更新旨在修复已发现的安全漏洞和其他严重的 CPU 缺陷。虽然 CPU 性能可能会受到影响,但经过修补的微代码通常仍比未修补的微代码性能更好,因为后者需要内核本身进行缓解。根据 CPU 类型,可能无法在不知情地让 CPU 处于不安全状态的情况下,达到有缺陷的出厂状态的性能水平。
To get an overview of present CPU vulnerabilities and their mitigations, run
lscpu. Current real-world known vulnerabilities can only show up if the
Proxmox VE host is up to date, its version not
end of life, and has at least been rebooted since the
last kernel update.
要了解当前存在的 CPU 漏洞及其缓解措施,请运行 lscpu。当前已知的实际漏洞只有在 Proxmox VE 主机保持最新状态、版本未达到生命周期终止且至少在上次内核更新后重启过,才会显示出来。
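For example, to show only the vulnerability-related lines of the lscpu output, you can filter it with grep (a generic shell filter, not a Proxmox VE specific tool):
# lscpu | grep -i vulnerability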
Besides the recommended microcode update via
persistent BIOS/UEFI updates, there is also
an independent method via Early OS Microcode Updates. It is convenient to use
and also quite helpful when the motherboard vendor no longer provides BIOS/UEFI
updates. Regardless of the method in use, a reboot is always needed to apply a
microcode update.
除了通过持久的 BIOS/UEFI 更新进行推荐的微代码更新外,还有一种独立的方法,即通过早期操作系统微代码更新。这种方法使用方便,当主板厂商不再提供 BIOS/UEFI 更新时也非常有用。无论使用哪种方法,应用微代码更新都需要重启。
Set up Early OS Microcode Updates
设置早期操作系统微代码更新
To set up microcode updates that are applied early on boot by the Linux kernel,
you need to:
要设置由 Linux 内核在启动早期应用的微代码更新,您需要:
-
Enable the Debian Firmware Repository
启用 Debian 固件代码仓库 -
Get the latest available packages apt update (or use the web interface, under Node → Updates)
获取最新可用的软件包 apt update(或使用网页界面,在节点 → 更新下) -
Install the CPU-vendor specific microcode package:
安装特定于 CPU 厂商的微代码包:-
For Intel CPUs: apt install intel-microcode
对于 Intel CPU:apt install intel-microcode -
For AMD CPUs: apt install amd64-microcode
对于 AMD CPU:apt install amd64-microcode
-
-
Reboot the Proxmox VE host
重启 Proxmox VE 主机
Any future microcode update will also require a reboot to be loaded.
任何未来的微代码更新也需要重启才能加载。
Microcode Version 微代码版本
To get the current running microcode revision for comparison or debugging
purposes:
要获取当前运行的微代码版本以进行比较或调试:
# grep microcode /proc/cpuinfo | uniq
microcode : 0xf0
A microcode package has updates for many different CPUs. But updates
specifically for your CPU might not come often. So, just looking at the date on
the package won’t tell you when the company actually released an update for your
specific CPU.
一个微代码包包含针对许多不同 CPU 的更新。但针对您 CPU 的更新可能不会经常发布。因此,仅查看包上的日期无法告诉您公司何时实际发布了针对您特定 CPU 的更新。
If you’ve installed a new microcode package and rebooted your Proxmox VE host, and
this new microcode is newer than both, the version baked into the CPU and the
one from the motherboard’s firmware, you’ll see a message in the system log
saying "microcode updated early".
如果您安装了新的微代码包并重启了您的 Proxmox VE 主机,并且这个新的微代码版本比 CPU 内置的版本和主板固件中的版本都要新,您将在系统日志中看到一条消息,显示“microcode updated early”。
# dmesg | grep microcode
[    0.000000] microcode: microcode updated early to revision 0xf0, date = 2021-11-12
[    0.896580] microcode: Microcode Update Driver: v2.2.
Troubleshooting 故障排除
For debugging purposes, the Early OS Microcode Update that is normally applied
at each system boot can be temporarily disabled as follows:
出于调试目的,系统启动时定期应用的早期操作系统微代码更新可以暂时禁用,方法如下:
-
make sure that the host can be rebooted safely
确保主机可以安全重启 -
reboot the host to get to the GRUB menu (hold SHIFT if it is hidden)
重启主机以进入 GRUB 菜单(如果菜单隐藏,按住 SHIFT 键) -
at the desired Proxmox VE boot entry press E
在所需的 Proxmox VE 启动项上按 E 键 -
go to the line which starts with linux and append dis_ucode_ldr, separated by a space
找到以 linux 开头的那一行,在其后添加一个空格,然后输入 dis_ucode_ldr -
press CTRL-X to boot this time without an Early OS Microcode Update
按 CTRL-X 键,这次将不进行早期操作系统微码更新启动
If a problem related to a recent microcode update is suspected, a package
downgrade should be considered instead of package removal
(apt purge <intel-microcode|amd64-microcode>). Otherwise, a too old
persisted microcode might be loaded, even
though a more recent one would run without problems.
如果怀疑最近的微代码更新引起了问题,应考虑降级包而不是删除包(apt purge <intel-microcode|amd64-microcode>)。否则,可能会加载一个过旧的持久化微代码,尽管更新的微代码可以正常运行。
A downgrade is possible if an earlier microcode package version is
available in the Debian repository, as shown in this example:
如果 Debian 代码仓库中有早期的微代码包版本,则可以降级,如下例所示:
# apt list -a intel-microcode
Listing... Done
intel-microcode/stable-security,now 3.20230808.1~deb12u1 amd64 [installed]
intel-microcode/stable 3.20230512.1 amd64
# apt install intel-microcode=3.202305*
...
Selected version '3.20230512.1' (Debian:12.1/stable [amd64]) for 'intel-microcode'
...
dpkg: warning: downgrading intel-microcode from 3.20230808.1~deb12u1 to 3.20230512.1
...
intel-microcode: microcode will be updated at next boot
...
Make sure (again) that the host can be rebooted
safely. To apply an older microcode
potentially included in the microcode package for your CPU type, reboot now.
请再次确保主机可以安全重启。要应用可能包含在针对您 CPU 类型的微代码包中的较旧微代码,请立即重启。
|
|
It makes sense to hold the downgraded package for a while and try more recent
versions again at a later time. Even if the package version is the same in the
future, system updates may have fixed the experienced problem in the meantime. # apt-mark hold intel-microcode intel-microcode set on hold. # apt-mark unhold intel-microcode # apt update # apt upgrade |
3.4. Network Configuration
3.4. 网络配置
Proxmox VE is using the Linux network stack. This provides a lot of flexibility on
how to set up the network on the Proxmox VE nodes. The configuration can be done
either via the GUI, or by manually editing the file /etc/network/interfaces,
which contains the whole network configuration. The interfaces(5) manual
page contains the complete format description. All Proxmox VE tools try hard to
preserve direct user modifications, but using the GUI is still preferable, because it
protects you from errors.
Proxmox VE 使用 Linux 网络栈。这为如何在 Proxmox VE 节点上设置网络提供了很大的灵活性。配置可以通过图形界面完成,也可以通过手动编辑包含完整网络配置的 /etc/network/interfaces 文件来完成。interfaces(5) 手册页包含完整的格式说明。所有 Proxmox VE 工具都尽力保留用户的直接修改,但仍建议使用图形界面,因为它能防止错误。
A Linux bridge interface (commonly called vmbrX) is needed to connect guests
to the underlying physical network. It can be thought of as a virtual switch
which the guests and physical interfaces are connected to. This section provides
some examples on how the network can be set up to accommodate different use cases
like redundancy with a bond,
vlans or
routed and
NAT setups.
需要一个 Linux 桥接接口(通常称为 vmbrX)来将虚拟机连接到底层的物理网络。它可以被看作是一个虚拟交换机,虚拟机和物理接口都连接到它。本节提供了一些示例,说明如何设置网络以满足不同的使用场景,如带有绑定的冗余、VLAN 或路由和 NAT 设置。
The Software Defined Network is an option for more complex
virtual networks in Proxmox VE clusters.
软件定义网络是 Proxmox VE 集群中用于更复杂虚拟网络的一个选项。
|
|
It’s discouraged to use the traditional Debian tools ifup and ifdown
if unsure, as they have some pitfalls like interrupting all guest traffic on
ifdown vmbrX but not reconnecting those guest again when doing ifup on the
same bridge later. 如果不确定,建议不要使用传统的 Debian 工具 ifup 和 ifdown,因为它们存在一些缺陷,比如在对 vmbrX 执行 ifdown 时会中断所有来宾流量,但在随后对同一桥接执行 ifup 时不会重新连接这些来宾。 |
3.4.1. Apply Network Changes
3.4.1. 应用网络更改
Proxmox VE does not write changes directly to /etc/network/interfaces. Instead, we
write into a temporary file called /etc/network/interfaces.new, this way you
can do many related changes at once. This also allows you to ensure your changes
are correct before applying them, as a wrong network configuration may render a node
inaccessible.
Proxmox VE 不会直接将更改写入 /etc/network/interfaces。相反,我们会写入一个名为 /etc/network/interfaces.new 的临时文件,这样你可以一次性进行多项相关更改。这也能确保在应用之前你的更改是正确的,因为错误的网络配置可能导致节点无法访问。
Live-Reload Network with ifupdown2
使用 ifupdown2 实时重载网络
With the recommended ifupdown2 package (default for new installations since
Proxmox VE 7.0), it is possible to apply network configuration changes without a
reboot. If you change the network configuration via the GUI, you can click the
Apply Configuration button. This will move changes from the staging
interfaces.new file to /etc/network/interfaces and apply them live.
使用推荐的 ifupdown2 包(自 Proxmox VE 7.0 起新安装的默认包),可以在不重启的情况下应用网络配置更改。如果您通过 GUI 更改网络配置,可以点击“应用配置”按钮。这将把更改从暂存的 interfaces.new 文件移动到 /etc/network/interfaces 并实时应用。
If you made manual changes directly to the /etc/network/interfaces file, you
can apply them by running ifreload -a.
如果您直接手动修改了 /etc/network/interfaces 文件,可以通过运行 ifreload -a 来应用更改。
|
|
If you installed Proxmox VE on top of Debian, or upgraded to Proxmox VE 7.0 from an
older Proxmox VE installation, make sure ifupdown2 is installed: apt install
ifupdown2 如果您是在 Debian 上安装的 Proxmox VE,或从旧版本的 Proxmox VE 升级到 Proxmox VE 7.0,请确保已安装 ifupdown2:apt install ifupdown2 |
Reboot Node to Apply
重启节点以应用
Another way to apply a new network configuration is to reboot the node.
In that case, the systemd service pvenetcommit will activate the staging
interfaces.new file before the networking service applies that
configuration.
另一种应用新的网络配置的方法是重启节点。在这种情况下,systemd 服务 pvenetcommit 会在网络服务应用该配置之前激活 staging interfaces.new 文件。
3.4.2. Naming Conventions
3.4.2. 命名约定
We currently use the following naming conventions for device names:
我们目前使用以下设备名称命名约定:
-
Ethernet devices: en*, systemd network interface names. This naming scheme is used for new Proxmox VE installations since version 5.0.
以太网设备:en*,systemd 网络接口名称。该命名方案自 Proxmox VE 5.0 版本起用于新的安装。 -
Ethernet devices: eth[N], where 0 ≤ N (eth0, eth1, …) This naming scheme is used for Proxmox VE hosts which were installed before the 5.0 release. When upgrading to 5.0, the names are kept as-is.
以太网设备:eth[N],其中 0 ≤ N(eth0,eth1,…)此命名方案用于在 5.0 版本发布之前安装的 Proxmox VE 主机。升级到 5.0 时,名称保持不变。 -
Bridge names: Commonly vmbr[N], where 0 ≤ N ≤ 4094 (vmbr0 - vmbr4094), but you can use any alphanumeric string that starts with a character and is at most 10 characters long.
桥接名称:通常为 vmbr[N],其中 0 ≤ N ≤ 4094(vmbr0 - vmbr4094),但您可以使用任何以字母开头且最长不超过 10 个字符的字母数字字符串。 -
Bonds: bond[N], where 0 ≤ N (bond0, bond1, …)
绑定接口:bond[N],其中 0 ≤ N(bond0,bond1,…) -
VLANs: Simply add the VLAN number to the device name, separated by a period (eno1.50, bond1.30)
VLAN:只需在设备名称后添加 VLAN 编号,中间用点分隔(eno1.50,bond1.30)
This makes it easier to debug network problems, because the device
name implies the device type.
这使得调试网络问题更加容易,因为设备名称暗示了设备类型。
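To check which names were assigned on a particular node, you can list all interfaces with the standard iproute2 tooling; the resulting names depend on your hardware:
# ip -br link show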
Systemd Network Interface Names
Systemd 网络接口名称
Systemd defines a versioned naming scheme for network device names. The
scheme uses the two-character prefix en for Ethernet network devices. The
next characters depend on the device driver, device location, and other
attributes. Some possible patterns are:
Systemd 定义了一个带版本的网络设备命名方案。该方案使用两个字符的前缀 en 表示以太网设备。接下来的字符取决于设备驱动、设备位置和其他属性。一些可能的模式包括:
-
o<index>[n<phys_port_name>|d<dev_port>] — devices on board
o<index>[n<phys_port_name>|d<dev_port>] — 板载设备 -
s<slot>[f<function>][n<phys_port_name>|d<dev_port>] — devices by hotplug id
s<slot>[f<function>][n<phys_port_name>|d<dev_port>] — 通过热插拔 ID 识别的设备 -
[P<domain>]p<bus>s<slot>[f<function>][n<phys_port_name>|d<dev_port>] — devices by bus id
[P<domain>]p<bus>s<slot>[f<function>][n<phys_port_name>|d<dev_port>] — 通过总线 ID 识别的设备 -
x<MAC> — devices by MAC address
x<MAC> — 通过 MAC 地址识别的设备
Some examples for the most common patterns are:
以下是一些最常见模式的示例:
-
eno1 — is the first on-board NIC
eno1 — 是第一个板载网卡 -
enp3s0f1 — is function 1 of the NIC on PCI bus 3, slot 0
enp3s0f1 — 是 PCI 总线 3、插槽 0 上的网卡的功能 1
For a full list of possible device name patterns, see the
systemd.net-naming-scheme(7) manpage.
有关可能的设备名称模式的完整列表,请参见 systemd.net-naming-scheme(7)手册页。
A new version of systemd may define a new version of the network device naming
scheme, which it then uses by default. Consequently, updating to a newer
systemd version, for example during a major Proxmox VE upgrade, can change the names
of network devices and require adjusting the network configuration. To avoid
name changes due to a new version of the naming scheme, you can manually pin a
particular naming scheme version (see
below).
systemd 的新版本可能会定义网络设备命名方案的新版本,并默认使用该方案。因此,升级到较新的 systemd 版本,例如在进行重大 Proxmox VE 升级时,可能会更改网络设备的名称,并需要调整网络配置。为了避免因命名方案新版本而导致的名称更改,您可以手动固定特定的命名方案版本(见下文)。
However, even with a pinned naming scheme version, network device names can
still change due to kernel or driver updates. In order to avoid name changes
for a particular network device altogether, you can manually override its name
using a link file (see below).
然而,即使使用了固定的命名方案版本,网络设备名称仍可能因内核或驱动程序更新而发生变化。为了完全避免特定网络设备名称的更改,您可以使用链接文件手动覆盖其名称(见下文)。
For more information on network interface names, see
Predictable Network Interface
Names.
有关网络接口名称的更多信息,请参见可预测的网络接口名称。
Pinning a specific naming scheme version
固定特定的命名方案版本
You can pin a specific version of the naming scheme for network devices by
adding the net.naming-scheme=<version> parameter to the
kernel command line. For a list of naming
scheme versions, see the
systemd.net-naming-scheme(7) manpage.
您可以通过在内核命令行中添加 net.naming-scheme=<version> 参数来固定网络设备的特定命名方案版本。有关命名方案版本的列表,请参见 systemd.net-naming-scheme(7) 手册页。
For example, to pin the version v252, which is the latest naming scheme
version for a fresh Proxmox VE 8.0 installation, add the following kernel
command-line parameter:
例如,要固定版本 v252,这是 Proxmox VE 8.0 全新安装的最新命名方案版本,请添加以下内核命令行参数:
net.naming-scheme=v252
See also this section on editing the kernel
command line. You need to reboot for the changes to take effect.
另请参见本节关于编辑内核命令行的内容。您需要重启系统以使更改生效。
Overriding network device names
覆盖网络设备名称
You can manually assign a name to a particular network device using a custom
systemd.link
file. This overrides the name that would be assigned according to the latest
network device naming scheme. This way, you can avoid naming changes due to
kernel updates, driver updates or newer versions of the naming scheme.
您可以使用自定义的 systemd.link 文件手动为特定网络设备分配名称。这将覆盖根据最新网络设备命名方案分配的名称。通过这种方式,您可以避免因内核更新、驱动更新或命名方案新版本而导致的名称变化。
Custom link files should be placed in /etc/systemd/network/ and named
<n>-<id>.link, where n is a priority smaller than 99 and id is some
identifier. A link file has two sections: [Match] determines which interfaces
the file will apply to; [Link] determines how these interfaces should be
configured, including their naming.
自定义链接文件应放置在 /etc/systemd/network/ 目录下,命名格式为 <n>-<id>.link,其中 n 是小于 99 的优先级,id 是某个标识符。链接文件包含两个部分:[Match] 用于确定该文件适用于哪些接口;[Link] 用于确定这些接口应如何配置,包括它们的命名。
To assign a name to a particular network device, you need a way to uniquely and
permanently identify that device in the [Match] section. One possibility is
to match the device’s MAC address using the MACAddress option, as it is
unlikely to change.
要为特定的网络设备分配名称,需要在 [Match] 部分有一种方法来唯一且永久地识别该设备。一种可能是使用 MACAddress 选项匹配设备的 MAC 地址,因为它不太可能改变。
The [Match] section should also contain a Type option to make sure it only
matches the expected physical interface, and not bridge/bond/VLAN interfaces
with the same MAC address. In most setups, Type should be set to ether to
match only Ethernet devices, but some setups may require other choices. See the
systemd.link(5)
manpage for more details.
[Match] 部分还应包含 Type 选项,以确保它只匹配预期的物理接口,而不是具有相同 MAC 地址的桥接/绑定/VLAN 接口。在大多数设置中,Type 应设置为 ether,以仅匹配以太网设备,但某些设置可能需要其他选项。更多细节请参见 systemd.link(5) 手册页。
Then, you can assign a name using the Name option in the [Link] section.
然后,可以在 [Link] 部分使用 Name 选项分配名称。
Link files are copied to the initramfs, so it is recommended to refresh the
initramfs after adding, modifying, or removing a link file:
链接文件会被复制到 initramfs 中,因此建议在添加、修改或删除链接文件后刷新 initramfs:
# update-initramfs -u -k all
For example, to assign the name enwan0 to the Ethernet device with MAC
address aa:bb:cc:dd:ee:ff, create a file
/etc/systemd/network/10-enwan0.link with the following contents:
例如,要将 MAC 地址为 aa:bb:cc:dd:ee:ff 的以太网设备命名为 enwan0,可以创建一个文件/etc/systemd/network/10-enwan0.link,内容如下:
[Match]
MACAddress=aa:bb:cc:dd:ee:ff
Type=ether

[Link]
Name=enwan0
Do not forget to adjust /etc/network/interfaces to use the new name, and
refresh your initramfs as described above. You need to reboot the node for
the change to take effect.
别忘了调整/etc/network/interfaces 以使用新名称,并按照上述方法刷新 initramfs。需要重启节点以使更改生效。
|
|
It is recommended to assign a name starting with en or eth so that
Proxmox VE recognizes the interface as a physical network device which can then be
configured via the GUI. Also, you should ensure that the name will not clash
with other interface names in the future. One possibility is to assign a name
that does not match any name pattern that systemd uses for network interfaces
(see above), such as enwan0 in the
example above. 建议分配以 en 或 eth 开头的名称,这样 Proxmox VE 会将该接口识别为物理网络设备,从而可以通过 GUI 进行配置。同时,应确保该名称将来不会与其他接口名称冲突。一种方法是分配一个不符合 systemd 用于网络接口的任何名称模式的名称(见上文),例如上述示例中的 enwan0。 |
For more information on link files, see the
systemd.link(5)
manpage.
有关链接文件的更多信息,请参见 systemd.link(5) 手册页。
3.4.3. Choosing a network configuration
3.4.3. 选择网络配置
Depending on your current network organization and your resources you can
choose either a bridged, routed, or masquerading networking setup.
根据您当前的网络组织和资源,您可以选择桥接、路由或伪装网络设置。
Proxmox VE server in a private LAN, using an external gateway to reach the internet
Proxmox VE 服务器位于私有局域网中,使用外部网关访问互联网
The Bridged model makes the most sense in this case, and this is also
the default mode on new Proxmox VE installations.
Each of your Guest system will have a virtual interface attached to the
Proxmox VE bridge. This is similar in effect to having the Guest network card
directly connected to a new switch on your LAN, the Proxmox VE host playing the role
of the switch.
在这种情况下,桥接模型最为合理,这也是新安装的 Proxmox VE 的默认模式。每个客户机系统都会有一个虚拟接口连接到 Proxmox VE 桥。这在效果上类似于将客户机的网卡直接连接到局域网中的一个新交换机,Proxmox VE 主机则扮演交换机的角色。
Proxmox VE server at hosting provider, with public IP ranges for Guests
托管服务提供商处的 Proxmox VE 服务器,客户机使用公共 IP 地址段
For this setup, you can use either a Bridged or Routed model, depending on
what your provider allows.
对于此设置,您可以根据提供商的允许情况使用桥接模型或路由模型。
Proxmox VE server at hosting provider, with a single public IP address
托管服务提供商处的 Proxmox VE 服务器,使用单个公共 IP 地址
In that case, the only way to get outgoing network access for your guest
systems is to use Masquerading. For incoming network access to your guests,
you will need to configure Port Forwarding.
在这种情况下,获取来宾系统的外发网络访问的唯一方法是使用伪装。对于来宾的入站网络访问,您需要配置端口转发。
For further flexibility, you can configure
VLANs (IEEE 802.1q) and network bonding, also known as "link
aggregation". That way it is possible to build complex and flexible
virtual networks.
为了获得更大的灵活性,您可以配置 VLAN(IEEE 802.1q)和网络绑定,也称为“链路聚合”。这样就可以构建复杂且灵活的虚拟网络。
3.4.4. Default Configuration using a Bridge
3.4.4. 使用桥接的默认配置
Bridges are like physical network switches implemented in software.
All virtual guests can share a single bridge, or you can create multiple
bridges to separate network domains. Each host can have up to 4094 bridges.
桥接就像软件实现的物理网络交换机。所有虚拟来宾都可以共享一个桥接,或者您可以创建多个桥接以分隔网络域。每个主机最多可以拥有 4094 个桥接。
The installation program creates a single bridge named vmbr0, which
is connected to the first Ethernet card. The corresponding
configuration in /etc/network/interfaces might look like this:
安装程序会创建一个名为 vmbr0 的单一桥接,该桥接连接到第一块以太网卡。对应的配置文件 /etc/network/interfaces 可能如下所示:
auto lo
iface lo inet loopback
iface eno1 inet manual
auto vmbr0
iface vmbr0 inet static
address 192.168.10.2/24
gateway 192.168.10.1
bridge-ports eno1
bridge-stp off
bridge-fd 0
Virtual machines behave as if they were directly connected to the
physical network. The network, in turn, sees each virtual machine as
having its own MAC, even though there is only one network cable
connecting all of these VMs to the network.
虚拟机的行为就像它们直接连接到物理网络一样。网络则将每台虚拟机视为拥有自己的 MAC 地址,尽管实际上只有一根网线将所有这些虚拟机连接到网络。
3.4.5. Routed Configuration
3.4.5. 路由配置
Most hosting providers do not support the above setup. For security
reasons, they disable networking as soon as they detect multiple MAC
addresses on a single interface.
大多数托管服务提供商不支持上述设置。出于安全原因,一旦检测到单个接口上存在多个 MAC 地址,他们会禁用网络连接。
|
|
Some providers allow you to register additional MACs through their
management interface. This avoids the problem, but can be clumsy to
configure because you need to register a MAC for each of your VMs. 一些提供商允许您通过其管理界面注册额外的 MAC 地址。这可以避免问题,但配置起来可能比较麻烦,因为您需要为每个虚拟机注册一个 MAC 地址。 |
You can avoid the problem by “routing” all traffic via a single
interface. This makes sure that all network packets use the same MAC
address.
您可以通过“路由”所有流量通过单一接口来避免该问题。这确保所有网络数据包使用相同的 MAC 地址。
A common scenario is that you have a public IP (assume 198.51.100.5
for this example), and an additional IP block for your VMs
(203.0.113.16/28). We recommend the following setup for such
situations:
一个常见的场景是,您拥有一个公网 IP(本例假设为 198.51.100.5),以及一个用于虚拟机的额外 IP 段(203.0.113.16/28)。我们建议在此类情况下采用以下设置:
auto lo
iface lo inet loopback
auto eno0
iface eno0 inet static
address 198.51.100.5/29
gateway 198.51.100.1
post-up echo 1 > /proc/sys/net/ipv4/ip_forward
post-up echo 1 > /proc/sys/net/ipv4/conf/eno0/proxy_arp
auto vmbr0
iface vmbr0 inet static
address 203.0.113.17/28
bridge-ports none
bridge-stp off
bridge-fd 0
3.4.6. Masquerading (NAT) with iptables
3.4.6. 使用 iptables 进行伪装(NAT)
Masquerading allows guests having only a private IP address to access the
network by using the host IP address for outgoing traffic. Each outgoing
packet is rewritten by iptables to appear as originating from the host,
and responses are rewritten accordingly to be routed to the original sender.
伪装允许仅拥有私有 IP 地址的客户机通过使用主机 IP 地址进行外发流量来访问网络。每个外发的数据包都会被 iptables 重写,使其看起来像是来自主机,响应也会相应地被重写,以便路由回原始发送者。
auto lo
iface lo inet loopback
auto eno1
#real IP address
iface eno1 inet static
address 198.51.100.5/24
gateway 198.51.100.1
auto vmbr0
#private sub network
iface vmbr0 inet static
address 10.10.10.1/24
bridge-ports none
bridge-stp off
bridge-fd 0
post-up echo 1 > /proc/sys/net/ipv4/ip_forward
post-up iptables -t nat -A POSTROUTING -s '10.10.10.0/24' -o eno1 -j MASQUERADE
post-down iptables -t nat -D POSTROUTING -s '10.10.10.0/24' -o eno1 -j MASQUERADE
|
|
In some masquerade setups with firewall enabled, conntrack zones might be
needed for outgoing connections. Otherwise the firewall could block outgoing
connections since they will prefer the POSTROUTING of the VM bridge (and not
MASQUERADE). 在启用防火墙的某些伪装设置中,可能需要为外发连接配置 conntrack 区域。否则,防火墙可能会阻止外发连接,因为它们会优先选择虚拟机桥接的 POSTROUTING(而非伪装 MASQUERADE)。 |
Adding these lines in the /etc/network/interfaces can fix this problem:
在 /etc/network/interfaces 中添加以下几行可以解决此问题:
post-up   iptables -t raw -I PREROUTING -i fwbr+ -j CT --zone 1
post-down iptables -t raw -D PREROUTING -i fwbr+ -j CT --zone 1
For more information about this, refer to the following links:
有关更多信息,请参阅以下链接:
3.4.7. Linux Bond 3.4.7. Linux 绑定
Bonding (also called NIC teaming or Link Aggregation) is a technique
for binding multiple NICs to a single network device. It can be used
to achieve different goals, such as making the network fault-tolerant,
increasing performance, or both together.
绑定(也称为 NIC 团队或链路聚合)是一种将多个网卡绑定到单个网络设备的技术。它可以实现不同的目标,比如使网络具备容错能力、提高性能,或者两者兼顾。
High-speed hardware like Fibre Channel and the associated switching
hardware can be quite expensive. By doing link aggregation, two NICs
can appear as one logical interface, resulting in double speed. This
is a native Linux kernel feature that is supported by most
switches. If your nodes have multiple Ethernet ports, you can
distribute your points of failure by running network cables to
different switches and the bonded connection will failover to one
cable or the other in case of network trouble.
高速硬件如光纤通道及其相关交换硬件可能相当昂贵。通过链路聚合,两个网卡可以表现为一个逻辑接口,从而实现双倍速度。这是 Linux 内核的原生功能,大多数交换机都支持。如果你的节点有多个以太网端口,可以通过将网络线缆连接到不同的交换机来分散故障点,在网络出现问题时,绑定连接会自动切换到另一条线缆。
Aggregated links can improve live-migration delays and improve the
speed of replication of data between Proxmox VE Cluster nodes.
聚合链路可以改善实时迁移的延迟,并提高 Proxmox VE 集群节点之间数据复制的速度。
There are 7 modes for bonding:
绑定有 7 种模式:
-
Round-robin (balance-rr): Transmit network packets in sequential order from the first available network interface (NIC) slave through the last. This mode provides load balancing and fault tolerance.
轮询(balance-rr):按顺序从第一个可用的网络接口(NIC)从属设备传输网络数据包,直到最后一个。此模式提供负载均衡和容错能力。 -
Active-backup (active-backup): Only one NIC slave in the bond is active. A different slave becomes active if, and only if, the active slave fails. The single logical bonded interface’s MAC address is externally visible on only one NIC (port) to avoid distortion in the network switch. This mode provides fault tolerance.
主动备份(active-backup):绑定中的只有一个 NIC 从属设备处于活动状态。只有当活动从属设备发生故障时,另一个从属设备才会变为活动状态。单个逻辑绑定接口的 MAC 地址仅在一个 NIC(端口)上对外可见,以避免网络交换机中的混乱。此模式提供容错能力。 -
XOR (balance-xor): Transmit network packets based on [(source MAC address XOR’d with destination MAC address) modulo NIC slave count]. This selects the same NIC slave for each destination MAC address. This mode provides load balancing and fault tolerance.
异或(balance-xor):基于[(源 MAC 地址与目标 MAC 地址进行异或运算)对 NIC 从属设备数量取模]来传输网络数据包。这样每个目标 MAC 地址都会选择相同的 NIC 从属设备。此模式提供负载均衡和容错能力。 -
Broadcast (broadcast): Transmit network packets on all slave network interfaces. This mode provides fault tolerance.
广播(broadcast):在所有从属网络接口上发送网络数据包。此模式提供容错能力。 -
IEEE 802.3ad Dynamic link aggregation (802.3ad)(LACP): Creates aggregation groups that share the same speed and duplex settings. Utilizes all slave network interfaces in the active aggregator group according to the 802.3ad specification.
IEEE 802.3ad 动态链路聚合(802.3ad)(LACP):创建具有相同速度和双工设置的聚合组。根据 802.3ad 规范,利用活动聚合组中的所有从属网络接口。 -
Adaptive transmit load balancing (balance-tlb): Linux bonding driver mode that does not require any special network-switch support. The outgoing network packet traffic is distributed according to the current load (computed relative to the speed) on each network interface slave. Incoming traffic is received by one currently designated slave network interface. If this receiving slave fails, another slave takes over the MAC address of the failed receiving slave.
自适应传输负载均衡(balance-tlb):Linux 绑定驱动模式,不需要任何特殊的网络交换机支持。传出的网络数据包流量根据每个从属网络接口当前的负载(相对于速度计算)进行分配。传入流量由当前指定的一个从属网络接口接收。如果该接收从属接口失败,另一个从属接口将接管失败接收从属接口的 MAC 地址。 -
Adaptive load balancing (balance-alb): Includes balance-tlb plus receive load balancing (rlb) for IPV4 traffic, and does not require any special network switch support. The receive load balancing is achieved by ARP negotiation. The bonding driver intercepts the ARP Replies sent by the local system on their way out and overwrites the source hardware address with the unique hardware address of one of the NIC slaves in the single logical bonded interface such that different network-peers use different MAC addresses for their network packet traffic.
自适应负载均衡(balance-alb):包括 balance-tlb 以及针对 IPV4 流量的接收负载均衡(rlb),且不需要任何特殊的网络交换机支持。接收负载均衡通过 ARP 协商实现。绑定驱动会拦截本地系统发出的 ARP 回复,并将源硬件地址覆盖为单个逻辑绑定接口中某个 NIC 从属设备的唯一硬件地址,从而使不同的网络对端使用不同的 MAC 地址进行网络数据包通信。
If your switch supports the LACP (IEEE 802.3ad) protocol, then we recommend
using the corresponding bonding mode (802.3ad). Otherwise you should generally
use the active-backup mode.
如果您的交换机支持 LACP(IEEE 802.3ad)协议,我们建议使用相应的绑定模式(802.3ad)。否则,通常应使用主动-备份模式。
For the cluster network (Corosync) we recommend configuring it with multiple
networks. Corosync does not need a bond for network redundancy as it can switch
between networks by itself, if one becomes unusable.
对于集群网络(Corosync),我们建议配置多个网络。Corosync 不需要绑定来实现网络冗余,因为它可以在网络不可用时自行切换网络。
The following bond configuration can be used as a distributed/shared
storage network. The benefit would be that you get more speed and the
network will be fault-tolerant.
以下绑定配置可用作分布式/共享存储网络。其优点是可以获得更高的速度且网络具备容错能力。
Example: Use a bond with a fixed IP address  示例:使用带固定 IP 地址的绑定接口
auto lo
iface lo inet loopback
iface eno1 inet manual
iface eno2 inet manual
iface eno3 inet manual
auto bond0
iface bond0 inet static
bond-slaves eno1 eno2
address 192.168.1.2/24
bond-miimon 100
bond-mode 802.3ad
bond-xmit-hash-policy layer2+3
auto vmbr0
iface vmbr0 inet static
address 10.10.10.2/24
gateway 10.10.10.1
bridge-ports eno3
bridge-stp off
bridge-fd 0
Another possibility is to use the bond directly as the bridge port.
This can be used to make the guest network fault-tolerant.
另一种可能是直接将绑定接口用作桥接端口。这可以用来使客户机网络具备容错能力。
Example: Use a bond as the bridge port  示例:将绑定接口用作桥接端口
auto lo
iface lo inet loopback
iface eno1 inet manual
iface eno2 inet manual
auto bond0
iface bond0 inet manual
bond-slaves eno1 eno2
bond-miimon 100
bond-mode 802.3ad
bond-xmit-hash-policy layer2+3
auto vmbr0
iface vmbr0 inet static
address 10.10.10.2/24
gateway 10.10.10.1
bridge-ports bond0
bridge-stp off
bridge-fd 0
3.4.8. VLAN 802.1Q
A virtual LAN (VLAN) is a broadcast domain that is partitioned and
isolated in the network at layer two. So it is possible to have
multiple networks (4096) in a physical network, each independent of
the other ones.
虚拟局域网(VLAN)是在网络第二层被划分和隔离的广播域。因此,在一个物理网络中可以存在多个(4096 个)相互独立的网络。
Each VLAN network is identified by a number, often called a tag.
Network packets are then tagged to identify which virtual network
they belong to.
每个 VLAN 网络由一个数字标识,通常称为标签。网络数据包会被打上标签,以识别它们属于哪个虚拟网络。
VLAN for Guest Networks
访客网络的 VLAN
Proxmox VE supports this setup out of the box. You can specify the VLAN tag
when you create a VM. The VLAN tag is part of the guest network
configuration. The networking layer supports different modes to
implement VLANs, depending on the bridge configuration:
Proxmox VE 开箱即用地支持此设置。创建虚拟机时可以指定 VLAN 标签。VLAN 标签是访客网络配置的一部分。网络层根据桥接配置支持不同的模式来实现 VLAN:
-
VLAN awareness on the Linux bridge: In this case, each guest’s virtual network card is assigned to a VLAN tag, which is transparently supported by the Linux bridge. Trunk mode is also possible, but that makes configuration in the guest necessary.
Linux 桥接的 VLAN 感知:在这种情况下,每个客户机的虚拟网卡都会被分配一个 VLAN 标签,Linux 桥接会透明地支持该标签。也可以使用中继模式,但这需要在客户机内进行配置。 -
"traditional" VLAN on the Linux bridge: In contrast to the VLAN awareness method, this method is not transparent and creates a VLAN device with associated bridge for each VLAN. That is, creating a guest on VLAN 5 for example, would create two interfaces eno1.5 and vmbr0v5, which would remain until a reboot occurs.
Linux 桥接上的“传统”VLAN:与 VLAN 感知方法不同,这种方法不是透明的,会为每个 VLAN 创建一个带有关联桥接的 VLAN 设备。也就是说,例如为 VLAN 5 创建一个客户机,会创建两个接口 eno1.5 和 vmbr0v5,这些接口会一直存在直到系统重启。 -
Open vSwitch VLAN: This mode uses the OVS VLAN feature.
Open vSwitch VLAN:此模式使用 OVS 的 VLAN 功能。 -
Guest configured VLAN: VLANs are assigned inside the guest. In this case, the setup is completely done inside the guest and can not be influenced from the outside. The benefit is that you can use more than one VLAN on a single virtual NIC.
客户机配置的 VLAN:VLAN 在客户机内部分配。在这种情况下,所有设置完全在客户机内部完成,外部无法干预。优点是可以在单个虚拟网卡上使用多个 VLAN。
VLAN on the Host
主机上的 VLAN
To allow host communication with an isolated network, it is possible
to apply VLAN tags to any network device (NIC, Bond, Bridge). In
general, you should configure the VLAN on the interface with the least
abstraction layers between itself and the physical NIC.
为了允许主机与隔离网络通信,可以对任何网络设备(网卡、绑定、桥接)应用 VLAN 标签。通常,应在与物理网卡之间抽象层最少的接口上配置 VLAN。
For example, consider a default configuration where you want to place
the host management address on a separate VLAN.
例如,在默认配置中,如果您想将主机管理地址放置在单独的 VLAN 上。
Example: Traditional Linux bridge with the Proxmox VE management IP on VLAN 5  示例:使用 VLAN 5 为 Proxmox VE 管理 IP 配置传统的 Linux 桥接。
auto lo
iface lo inet loopback
iface eno1 inet manual
iface eno1.5 inet manual
auto vmbr0v5
iface vmbr0v5 inet static
address 10.10.10.2/24
gateway 10.10.10.1
bridge-ports eno1.5
bridge-stp off
bridge-fd 0
auto vmbr0
iface vmbr0 inet manual
bridge-ports eno1
bridge-stp off
bridge-fd 0
Example: VLAN-aware Linux bridge with the Proxmox VE management IP on VLAN 5  示例:使用 VLAN 5 为 Proxmox VE 管理 IP 配置支持 VLAN 的 Linux 桥接
auto lo
iface lo inet loopback
iface eno1 inet manual
auto vmbr0.5
iface vmbr0.5 inet static
address 10.10.10.2/24
gateway 10.10.10.1
auto vmbr0
iface vmbr0 inet manual
bridge-ports eno1
bridge-stp off
bridge-fd 0
bridge-vlan-aware yes
bridge-vids 2-4094
The next example is the same setup but a bond is used to
make this network fail-safe.
下一个示例是相同的设置,但使用了绑定接口以实现网络故障保护。
Example: Traditional Linux bridge with the Proxmox VE management IP on VLAN 5, using bond0  示例:使用 VLAN 5 和 bond0 为 Proxmox VE 管理 IP 配置传统的 Linux 桥接
auto lo
iface lo inet loopback
iface eno1 inet manual
iface eno2 inet manual
auto bond0
iface bond0 inet manual
bond-slaves eno1 eno2
bond-miimon 100
bond-mode 802.3ad
bond-xmit-hash-policy layer2+3
iface bond0.5 inet manual
auto vmbr0v5
iface vmbr0v5 inet static
address 10.10.10.2/24
gateway 10.10.10.1
bridge-ports bond0.5
bridge-stp off
bridge-fd 0
auto vmbr0
iface vmbr0 inet manual
bridge-ports bond0
bridge-stp off
bridge-fd 0
3.4.9. Disabling IPv6 on the Node
3.4.9. 在节点上禁用 IPv6
Proxmox VE works correctly in all environments, irrespective of whether IPv6 is
deployed or not. We recommend leaving all settings at the provided defaults.
Proxmox VE 在所有环境中均能正常工作,无论是否部署了 IPv6。我们建议保持所有设置为默认值。
Should you still need to disable support for IPv6 on your node, do so by
creating an appropriate sysctl.conf (5) snippet file and setting the proper
sysctls,
for example adding /etc/sysctl.d/disable-ipv6.conf with content:
如果您仍然需要在节点上禁用对 IPv6 的支持,可以通过创建适当的 sysctl.conf (5) 片段文件并设置相应的 sysctl 来实现,例如添加 /etc/sysctl.d/disable-ipv6.conf 文件,内容如下:
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
This method is preferred to disabling the loading of the IPv6 module on the
kernel commandline.
这种方法优于在内核命令行中禁用 IPv6 模块的加载。
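To load the new sysctl snippet without rebooting, you can re-apply all sysctl configuration files (a standard Debian procedure, not specific to Proxmox VE):
# sysctl --system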
3.4.10. Disabling MAC Learning on a Bridge
3.4.10. 禁用跨链桥上的 MAC 学习
By default, MAC learning is enabled on a bridge to ensure a smooth experience
with virtual guests and their networks.
默认情况下,桥接上启用了 MAC 学习,以确保虚拟客户机及其网络的顺畅体验。
But in some environments this can be undesired. Since Proxmox VE 7.3 you can disable
MAC learning on the bridge by setting the bridge-disable-mac-learning 1
configuration on a bridge in /etc/network/interfaces, for example:
但在某些环境中,这可能是不希望的。从 Proxmox VE 7.3 开始,您可以通过在`/etc/network/interfaces`中的桥接上设置`bridge-disable-mac-learning 1`配置来禁用桥接上的 MAC 学习,例如:
# ...
auto vmbr0
iface vmbr0 inet static
address 10.10.10.2/24
gateway 10.10.10.1
bridge-ports ens18
bridge-stp off
bridge-fd 0
bridge-disable-mac-learning 1
Once enabled, Proxmox VE will manually add the configured MAC address from VMs and
Containers to the bridge's forwarding database to ensure that guests can still
use the network - but only when they are using their actual MAC address.
启用后,Proxmox VE 将手动将虚拟机和容器配置的 MAC 地址添加到桥接的转发表中,以确保客户机仍然可以使用网络——但仅当它们使用其实际的 MAC 地址时。
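To verify which entries were added, you can inspect the bridge's forwarding database with the iproute2 bridge tool (shown here as a generic example for a bridge named vmbr0):
# bridge fdb show br vmbr0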
3.5. Time Synchronization
3.5. 时间同步
The Proxmox VE cluster stack itself relies heavily on the fact that all
the nodes have precisely synchronized time. Some other components,
like Ceph, also won’t work properly if the local time on all nodes is
not in sync.
Proxmox VE 集群堆栈本身非常依赖所有节点的时间完全同步。一些其他组件,如 Ceph,如果所有节点的本地时间不同步,也无法正常工作。
Time synchronization between nodes can be achieved using the “Network
Time Protocol” (NTP). As of Proxmox VE 7, chrony is used as the default
NTP daemon, while Proxmox VE 6 uses systemd-timesyncd. Both come preconfigured to
use a set of public servers.
节点之间的时间同步可以通过“网络时间协议”(NTP)实现。从 Proxmox VE 7 开始,默认使用 chrony 作为 NTP 守护进程,而 Proxmox VE 6 使用 systemd-timesyncd。两者都预配置为使用一组公共服务器。
|
|
If you upgrade your system to Proxmox VE 7, it is recommended that you
manually install either chrony, ntp or openntpd. 如果您将系统升级到 Proxmox VE 7,建议您手动安装 chrony、ntp 或 openntpd 之一。 |
3.5.1. Using Custom NTP Servers
3.5.1. 使用自定义 NTP 服务器
In some cases, it might be desired to use non-default NTP
servers. For example, if your Proxmox VE nodes do not have access to the
public internet due to restrictive firewall rules, you
need to set up local NTP servers and tell the NTP daemon to use
them.
在某些情况下,可能需要使用非默认的 NTP 服务器。例如,如果您的 Proxmox VE 节点由于严格的防火墙规则无法访问公共互联网,则需要设置本地 NTP 服务器,并告诉 NTP 守护进程使用它们。
For systems using chrony:
对于使用 chrony 的系统:
Specify which servers chrony should use in /etc/chrony/chrony.conf:
在 /etc/chrony/chrony.conf 中指定 chrony 应使用的服务器:
server ntp1.example.com iburst
server ntp2.example.com iburst
server ntp3.example.com iburst
Restart chrony: 重启 chrony:
# systemctl restart chronyd
Check the journal to confirm that the newly configured NTP servers are being
used:
检查日志以确认新配置的 NTP 服务器正在被使用:
# journalctl --since -1h -u chrony
...
Aug 26 13:00:09 node1 systemd[1]: Started chrony, an NTP client/server.
Aug 26 13:00:15 node1 chronyd[4873]: Selected source 10.0.0.1 (ntp1.example.com)
Aug 26 13:00:15 node1 chronyd[4873]: System clock TAI offset set to 37 seconds
...
For systems using systemd-timesyncd:
对于使用 systemd-timesyncd 的系统:
Specify which servers systemd-timesyncd should use in
/etc/systemd/timesyncd.conf:
在 /etc/systemd/timesyncd.conf 中指定 systemd-timesyncd 应使用的服务器:
[Time]
NTP=ntp1.example.com ntp2.example.com ntp3.example.com ntp4.example.com
Then, restart the synchronization service (systemctl restart
systemd-timesyncd), and verify that your newly configured NTP servers are in
use by checking the journal (journalctl --since -1h -u systemd-timesyncd):
然后,重启同步服务(systemctl restart systemd-timesyncd),并通过检查日志(journalctl --since -1h -u systemd-timesyncd)验证新配置的 NTP 服务器是否正在使用:
...
Oct 07 14:58:36 node1 systemd[1]: Stopping Network Time Synchronization...
Oct 07 14:58:36 node1 systemd[1]: Starting Network Time Synchronization...
Oct 07 14:58:36 node1 systemd[1]: Started Network Time Synchronization.
Oct 07 14:58:36 node1 systemd-timesyncd[13514]: Using NTP server 10.0.0.1:123 (ntp1.example.com).
Oct 07 14:58:36 node1 systemd-timesyncd[13514]: interval/delta/delay/jitter/drift 64s/-0.002s/0.020s/0.000s/-31ppm
...
3.6. External Metric Server
3.6. 外部指标服务器
In Proxmox VE, you can define external metric servers, which will periodically
receive various stats about your hosts, virtual guests and storages.
在 Proxmox VE 中,您可以定义外部指标服务器,这些服务器将定期接收有关您的主机、虚拟客户机和存储的各种统计数据。
Currently supported are: 当前支持的有:
-
Graphite (see https://graphiteapp.org )
Graphite(参见 https://graphiteapp.org ) -
InfluxDB (see https://www.influxdata.com/time-series-platform/influxdb/ )
InfluxDB(参见 https://www.influxdata.com/time-series-platform/influxdb/)
The external metric server definitions are saved in /etc/pve/status.cfg, and
can be edited through the web interface.
外部指标服务器的定义保存在 /etc/pve/status.cfg 中,可以通过网页界面进行编辑。
3.6.1. Graphite server configuration
3.6.1. Graphite 服务器配置
The default port is set to 2003 and the default graphite path is proxmox.
默认端口设置为 2003,默认的 graphite 路径是 proxmox。
By default, Proxmox VE sends the data over UDP, so the graphite server has to be
configured to accept this. Here the maximum transmission unit (MTU) can be
configured for environments not using the standard 1500 MTU.
默认情况下,Proxmox VE 通过 UDP 发送数据,因此必须配置 graphite 服务器以接受此类数据。这里可以为不使用标准 1500 MTU 的环境配置最大传输单元(MTU)。
You can also configure the plugin to use TCP. In order not to block the
important pvestatd statistic collection daemon, a timeout is required to cope
with network problems.
您也可以配置插件使用 TCP。为了不阻塞重要的 pvestatd 统计收集守护进程,需要设置超时以应对网络问题。
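Putting this together, a Graphite definition in /etc/pve/status.cfg might look like the following sketch; the entry name and server address are placeholders, and the layout assumes the usual Proxmox VE section-config format:
graphite: graphite-external
        server 192.0.2.20
        port 2003
        path proxmox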
3.6.2. Influxdb plugin configuration
3.6.2. Influxdb 插件配置
Proxmox VE sends the data over UDP, so the influxdb server has to be configured for
this. The MTU can also be configured here, if necessary.
Proxmox VE 通过 UDP 发送数据,因此必须配置 influxdb 服务器以支持此方式。如有必要,这里也可以配置 MTU。
Here is an example configuration for influxdb (on your influxdb server):
以下是 influxdb(在您的 influxdb 服务器上)的示例配置:
[[udp]]
   enabled = true
   bind-address = "0.0.0.0:8089"
   database = "proxmox"
   batch-size = 1000
   batch-timeout = "1s"
With this configuration, your server listens on all IP addresses on port 8089,
and writes the data in the proxmox database
使用此配置,您的服务器将在所有 IP 地址的 8089 端口监听,并将数据写入 proxmox 数据库
Alternatively, the plugin can be configured to use the http(s) API of InfluxDB 2.x.
InfluxDB 1.8.x does contain a forwards compatible API endpoint for this v2 API.
或者,插件可以配置为使用 InfluxDB 2.x 的 http(s) API。InfluxDB 1.8.x 包含一个向前兼容的 v2 API 端点。
To use it, set influxdbproto to http or https (depending on your configuration).
By default, Proxmox VE uses the organization proxmox and the bucket/db proxmox
(these can be set with the configuration options organization and bucket, respectively).
要使用它,请将 influxdbproto 设置为 http 或 https(取决于您的配置)。默认情况下,Proxmox VE 使用组织 proxmox 和 bucket/db proxmox(它们可以分别通过配置 organization 和 bucket 设置)。
Since InfluxDB’s v2 API is only available with authentication, you have
to generate a token that can write into the correct bucket and set it.
由于 InfluxDB 的 v2 API 仅在认证后可用,您必须生成一个可以写入正确桶的代币并进行设置。
In the v2 compatible API of 1.8.x, you can use user:password as token
(if required), and can omit the organization since that has no meaning in InfluxDB 1.x.
在 1.8.x 的 v2 兼容 API 中,您可以使用 user:password 作为代币(如果需要),并且可以省略组织,因为在 InfluxDB 1.x 中组织没有意义。
You can also set the HTTP Timeout (default is 1s) with the timeout setting,
as well as the maximum batch size (default 25000000 bytes) with the
max-body-size setting (this corresponds to the InfluxDB setting with the
same name).
您还可以通过 timeout 设置 HTTP 超时时间(默认是 1 秒),以及通过 max-body-size 设置最大批量大小(默认 25000000 字节)(这对应于 InfluxDB 中同名的设置)。
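Combining these options, an entry for the InfluxDB 2.x HTTP API in /etc/pve/status.cfg might look like the following sketch; the server, bucket and token values are placeholders, and the layout again assumes the usual Proxmox VE section-config format:
influxdb: influx-external
        server 192.0.2.30
        port 8086
        influxdbproto https
        organization proxmox
        bucket proxmox
        token <your-write-token>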
3.7. Disk Health Monitoring
3.7. 磁盘健康监控
Although a robust and redundant storage is recommended,
it can be very helpful to monitor the health of your local disks.
虽然建议使用强大且冗余的存储,但监控本地磁盘的健康状况非常有帮助。
Starting with Proxmox VE 4.3, the package smartmontools [2]
is installed and required. This is a set of tools to monitor and control
the S.M.A.R.T. system for local hard disks.
从 Proxmox VE 4.3 开始,已安装并要求使用包 smartmontools [2]。这是一套用于监控和控制本地硬盘 S.M.A.R.T. 系统的工具。
You can get the status of a disk by issuing the following command:
您可以通过执行以下命令来获取磁盘状态:
# smartctl -a /dev/sdX
where /dev/sdX is the path to one of your local disks.
其中 /dev/sdX 是您本地磁盘的路径之一。
If the output says: 如果输出显示:
SMART support is: Disabled
you can enable it with the command:
你可以使用以下命令启用它:
# smartctl -s on /dev/sdX
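For a quick overall health verdict instead of the full report, smartctl's -H flag can be used:
# smartctl -H /dev/sdX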
For more information on how to use smartctl, please see man smartctl.
有关如何使用 smartctl 的更多信息,请参见 man smartctl。
By default, the smartmontools daemon smartd is active and enabled, and scans
the disks under /dev/sdX and /dev/hdX every 30 minutes for errors and warnings, and sends an
e-mail to root if it detects a problem.
默认情况下,smartmontools 守护进程 smartd 是激活并启用的,它每 30 分钟扫描一次 /dev/sdX 和 /dev/hdX 下的磁盘,检查错误和警告,如果检测到问题,会发送电子邮件给 root。
For more information about how to configure smartd, please see man smartd and
man smartd.conf.
有关如何配置 smartd 的更多信息,请参见 man smartd 和 man smartd.conf。
If you use your hard disks with a hardware raid controller, there are most likely tools
to monitor the disks in the raid array and the array itself. For more information about this,
please refer to the vendor of your raid controller.
如果您使用带有硬件 RAID 控制器的硬盘,通常会有用于监控 RAID 阵列中硬盘及阵列本身的工具。有关更多信息,请参考您的 RAID 控制器供应商。
3.8. Logical Volume Manager (LVM)
3.8. 逻辑卷管理器(LVM)
Most people install Proxmox VE directly on a local disk. The Proxmox VE
installation CD offers several options for local disk management, and
the current default setup uses LVM. The installer lets you select a
single disk for such setup, and uses that disk as physical volume for
the Volume Group (VG) pve. The following output is from a
test installation using a small 8GB disk:
大多数人直接将 Proxmox VE 安装在本地磁盘上。Proxmox VE 安装光盘提供了多种本地磁盘管理选项,目前的默认设置使用 LVM。安装程序允许您选择单个磁盘进行此类设置,并将该磁盘用作卷组(VG)pve 的物理卷。以下输出来自使用一块小型 8GB 磁盘的测试安装:
# pvs
  PV         VG   Fmt  Attr PSize PFree
  /dev/sda3  pve  lvm2 a--  7.87g 876.00m

# vgs
  VG   #PV #LV #SN Attr   VSize VFree
  pve    1   3   0 wz--n- 7.87g 876.00m
The installer allocates three Logical Volumes (LV) inside this
VG:
安装程序在此卷组(VG)内分配了三个逻辑卷(LV):
# lvs
  LV   VG   Attr       LSize   Pool Origin Data%  Meta%
  data pve  twi-a-tz--   4.38g             0.00   0.63
  root pve  -wi-ao----   1.75g
  swap pve  -wi-ao---- 896.00m
- root
-
Formatted as ext4, and contains the operating system.
格式化为 ext4,包含操作系统。 - swap
-
Swap partition 交换分区
- data 数据
-
This volume uses LVM-thin, and is used to store VM images. LVM-thin is preferable for this task, because it offers efficient support for snapshots and clones.
该卷使用 LVM-thin,用于存储虚拟机镜像。LVM-thin 适合此任务,因为它对快照和克隆提供了高效的支持。
For Proxmox VE versions up to 4.1, the installer creates a standard logical
volume called “data”, which is mounted at /var/lib/vz.
对于 Proxmox VE 4.1 及之前版本,安装程序会创建一个名为“data”的标准逻辑卷,挂载在/var/lib/vz。
Starting from version 4.2, the logical volume “data” is a LVM-thin pool,
used to store block based guest images, and /var/lib/vz is simply a
directory on the root file system.
从 4.2 版本开始,逻辑卷“data”是一个 LVM-thin 池,用于存储基于区块的客户机镜像,而/var/lib/vz 只是根文件系统上的一个目录。
3.8.1. Hardware 3.8.1. 硬件
We highly recommend using a hardware RAID controller (with BBU) for
such setups. This increases performance, provides redundancy, and makes
disk replacements easier (hot-pluggable).
我们强烈建议在此类配置中使用带有 BBU 的硬件 RAID 控制器。这可以提升性能,提供冗余,并使磁盘更换更方便(支持热插拔)。
LVM itself does not need any special hardware, and memory requirements
are very low.
LVM 本身不需要任何特殊硬件,且内存需求非常低。
3.8.2. Bootloader 3.8.2. 引导加载程序
We install two boot loaders by default. The first partition contains
the standard GRUB boot loader. The second partition is an EFI System
Partition (ESP), which makes it possible to boot on EFI systems and to
apply persistent firmware updates from the
user space.
我们默认安装两个引导加载程序。第一个分区包含标准的 GRUB 引导加载程序。第二个分区是 EFI 系统分区(ESP),它使得在 EFI 系统上启动成为可能,并且可以从用户空间应用持久的固件更新。
3.8.3. Creating a Volume Group
3.8.3. 创建卷组
Let’s assume we have an empty disk /dev/sdb, onto which we want to
create a volume group named “vmdata”.
假设我们有一个空磁盘/dev/sdb,我们想在其上创建一个名为“vmdata”的卷组。
|
|
Please note that the following commands will destroy all
existing data on /dev/sdb. 请注意,以下命令将销毁 /dev/sdb 上的所有现有数据。 |
First create a partition.
首先创建一个分区。
# sgdisk -N 1 /dev/sdb
Create a Physical Volume (PV) without confirmation and with a 250K
metadata size.
创建一个物理卷(PV),不需要确认,元数据大小为 250K。
# pvcreate --metadatasize 250k -y -ff /dev/sdb1
Create a volume group named “vmdata” on /dev/sdb1
在 /dev/sdb1 上创建一个名为“vmdata”的卷组。
# vgcreate vmdata /dev/sdb1
3.8.4. Creating an extra LV for /var/lib/vz
3.8.4. 为 /var/lib/vz 创建额外的逻辑卷
This can be easily done by creating a new thin LV.
这可以通过创建一个新的精简逻辑卷轻松完成。
# lvcreate -n <Name> -V <Size[M,G,T]> <VG>/<LVThin_pool>
A real world example: 一个实际的例子:
# lvcreate -n vz -V 10G pve/data
Now a filesystem must be created on the LV.
现在必须在该逻辑卷上创建文件系统。
# mkfs.ext4 /dev/pve/vz
Finally, the new file system has to be mounted.
最后必须挂载它。
|
|
Be sure that /var/lib/vz is empty. On a default
installation it’s not. 确保 /var/lib/vz 是空的。默认安装时它不是空的。 |
To make it always accessible add the following line in /etc/fstab.
为了使其始终可访问,在 /etc/fstab 中添加以下行。
# echo '/dev/pve/vz /var/lib/vz ext4 defaults 0 2' >> /etc/fstab
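With the fstab entry above in place, the volume can then be mounted right away with:
# mount /var/lib/vz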
3.8.5. Resizing the thin pool
3.8.5. 调整精简池大小
Resize the LV and the metadata pool with the following command:
使用以下命令调整逻辑卷和元数据池的大小:
# lvresize --size +<size[\M,G,T]> --poolmetadatasize +<size[\M,G]> <VG>/<LVThin_pool>
|
|
When extending the data pool, the metadata pool must also be
extended. 扩展数据池时,元数据池也必须同时扩展。 |
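For example, to grow the default data thin pool by 10 GiB and its metadata pool by 1 GiB (the sizes are only illustrative), you could run:
# lvresize --size +10G --poolmetadatasize +1G pve/data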
3.9. ZFS on Linux
3.9. Linux 上的 ZFS
ZFS is a combined file system and logical volume manager designed by
Sun Microsystems. Starting with Proxmox VE 3.4, the native Linux
kernel port of the ZFS file system is introduced as optional
file system and also as an additional selection for the root
file system. There is no need to manually compile ZFS modules - all
packages are included.
ZFS 是由 Sun Microsystems 设计的结合文件系统和逻辑卷管理器。从 Proxmox VE 3.4 开始,引入了 ZFS 文件系统的原生 Linux 内核移植版本,作为可选文件系统,同时也作为根文件系统的额外选择。无需手动编译 ZFS 模块——所有软件包均已包含。
By using ZFS, it is possible to achieve maximum enterprise features with
low-budget hardware, but also high-performance systems by leveraging
SSD caching or even SSD-only setups. ZFS can replace costly
hardware RAID cards with moderate CPU and memory load combined with easy
management.
通过使用 ZFS,可以在低预算硬件上实现最大企业级功能,也可以通过利用 SSD 缓存甚至纯 SSD 配置实现高性能系统。ZFS 可以用适度的 CPU 和内存负载结合简便的管理,替代成本高昂的硬件 RAID 卡。
-
Easy configuration and management with Proxmox VE GUI and CLI.
通过 Proxmox VE 图形界面和命令行界面轻松配置和管理。 -
Reliable 可靠
-
Protection against data corruption
防止数据损坏 -
Data compression on file system level
文件系统级别的数据压缩 -
Snapshots 快照
-
Copy-on-write clone 写时复制克隆
-
Various raid levels: RAID0, RAID1, RAID10, RAIDZ-1, RAIDZ-2, RAIDZ-3, dRAID, dRAID2, dRAID3
各种 RAID 级别:RAID0、RAID1、RAID10、RAIDZ-1、RAIDZ-2、RAIDZ-3、dRAID、dRAID2、dRAID3 -
Can use SSD for cache
可以使用 SSD 作为缓存 -
Self healing 自我修复
-
Continuous integrity checking
持续完整性检查 -
Designed for high storage capacities
为大容量存储设计 -
Asynchronous replication over network
通过网络进行异步复制 -
Open Source 开源
-
Encryption 加密
-
…
3.9.1. Hardware 3.9.1. 硬件
ZFS depends heavily on memory, so you need at least 8GB to start. In
practice, use as much as you can get for your hardware/budget. To prevent
data corruption, we recommend the use of high quality ECC RAM.
ZFS 对内存依赖很大,因此至少需要 8GB 才能启动。实际上,应根据您的硬件和预算尽可能多地使用内存。为了防止数据损坏,我们建议使用高质量的 ECC 内存。
If you use a dedicated cache and/or log disk, you should use an
enterprise class SSD. This can
increase the overall performance significantly.
如果您使用专用的缓存和/或日志磁盘,应该使用企业级 SSD。这可以显著提升整体性能。
|
|
Do not use ZFS on top of a hardware RAID controller which has its
own cache management. ZFS needs to communicate directly with the disks. An
HBA adapter or something like an LSI controller flashed in “IT” mode is more
appropriate. 不要在带有自身缓存管理的硬件 RAID 控制器之上使用 ZFS。ZFS 需要直接与磁盘通信。HBA 适配器或类似于以“IT”模式刷写的 LSI 控制器更为合适。 |
If you are experimenting with an installation of Proxmox VE inside a VM
(Nested Virtualization), don’t use virtio for disks of that VM,
as they are not supported by ZFS. Use IDE or SCSI instead (also works
with the virtio SCSI controller type).
如果您在虚拟机内(嵌套虚拟化)尝试安装 Proxmox VE,不要为该虚拟机的磁盘使用 virtio,因为 ZFS 不支持它们。请改用 IDE 或 SCSI(也适用于 virtio SCSI 控制器类型)。
3.9.2. Installation as Root File System
3.9.2. 作为根文件系统的安装
When you install using the Proxmox VE installer, you can choose ZFS for the
root file system. You need to select the RAID type at installation
time:
当您使用 Proxmox VE 安装程序安装时,可以选择 ZFS 作为根文件系统。您需要在安装时选择 RAID 类型:
| RAID Level | Description |
|---|---|
| RAID0 | Also called "striping". The capacity of such a volume is the sum of the capacities of all disks. But RAID0 does not add any redundancy, so the failure of a single drive makes the volume unusable. |
| RAID1 | Also called "mirroring". Data is written identically to all disks. This mode requires at least 2 disks with the same size. The resulting capacity is that of a single disk. |
| RAID10 | A combination of RAID0 and RAID1. Requires at least 4 disks. |
| RAIDZ-1 | A variation on RAID-5, single parity. Requires at least 3 disks. |
| RAIDZ-2 | A variation on RAID-5, double parity. Requires at least 4 disks. |
| RAIDZ-3 | A variation on RAID-5, triple parity. Requires at least 5 disks. |
The installer automatically partitions the disks, creates a ZFS pool
called rpool, and installs the root file system on the ZFS subvolume
rpool/ROOT/pve-1.
安装程序会自动分区磁盘,创建一个名为 rpool 的 ZFS 池,并将根文件系统安装在 ZFS 子卷 rpool/ROOT/pve-1 上。
Another subvolume called rpool/data is created to store VM
images. In order to use that with the Proxmox VE tools, the installer
creates the following configuration entry in /etc/pve/storage.cfg:
另一个名为 rpool/data 的子卷被创建用于存储虚拟机镜像。为了使 Proxmox VE 工具能够使用它,安装程序在 /etc/pve/storage.cfg 中创建了以下配置条目:
zfspool: local-zfs
pool rpool/data
sparse
content images,rootdir
After installation, you can view your ZFS pool status using the
zpool command:
安装完成后,您可以使用 zpool 命令查看您的 ZFS 池状态:
# zpool status
pool: rpool
state: ONLINE
scan: none requested
config:
NAME STATE READ WRITE CKSUM
rpool ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
sda2 ONLINE 0 0 0
sdb2 ONLINE 0 0 0
mirror-1 ONLINE 0 0 0
sdc ONLINE 0 0 0
sdd ONLINE 0 0 0
errors: No known data errors
The zfs command is used to configure and manage your ZFS file systems. The
following command lists all file systems after installation:
zfs 命令用于配置和管理您的 ZFS 文件系统。安装完成后,以下命令列出所有文件系统:
# zfs list
NAME               USED  AVAIL  REFER  MOUNTPOINT
rpool             4.94G  7.68T    96K  /rpool
rpool/ROOT         702M  7.68T    96K  /rpool/ROOT
rpool/ROOT/pve-1   702M  7.68T   702M  /
rpool/data          96K  7.68T    96K  /rpool/data
rpool/swap        4.25G  7.69T    64K  -
3.9.3. ZFS RAID Level Considerations
3.9.3. ZFS RAID 级别考虑事项
There are a few factors to take into consideration when choosing the layout of
a ZFS pool. The basic building block of a ZFS pool is the virtual device, or
vdev. All vdevs in a pool are used equally and the data is striped among them
(RAID0). Check the zpoolconcepts(7) manpage for more details on vdevs.
在选择 ZFS 池的布局时,需要考虑几个因素。ZFS 池的基本构建块是虚拟设备,或称 vdev。池中的所有 vdev 都会被均等使用,数据会在它们之间进行条带化(RAID0)。有关 vdev 的更多详细信息,请查阅 zpoolconcepts(7)手册页。
Performance 性能
Each vdev type has different performance behaviors. The two
parameters of interest are the IOPS (Input/Output Operations per Second) and
the bandwidth with which data can be written or read.
每种 vdev 类型的性能表现不同。两个关注的参数是 IOPS(每秒输入/输出操作次数)和数据读写的带宽。
A mirror vdev (RAID1) will approximately behave like a single disk in regard
to both parameters when writing data. When reading data the performance will
scale linearly with the number of disks in the mirror.
镜像 vdev(RAID1)在写入数据时,其两个参数的表现大致相当于单个磁盘。读取数据时,性能会随着镜像中磁盘数量的增加而线性提升。
A common situation is to have 4 disks. When setting it up as 2 mirror vdevs
(RAID10) the pool will have the write characteristics of two single disks in
regard to IOPS and bandwidth. For read operations it will resemble 4 single
disks.
一种常见的情况是有 4 个磁盘。当将其设置为 2 个镜像 vdev(RAID10)时,存储池在 IOPS 和带宽方面的写入特性将类似于两个单独的磁盘。对于读取操作,它将类似于 4 个单独的磁盘。
A RAIDZ of any redundancy level will approximately behave like a single disk
in regard to IOPS with a lot of bandwidth. How much bandwidth depends on the
size of the RAIDZ vdev and the redundancy level.
任何冗余级别的 RAIDZ 在 IOPS 方面大致表现得像一个单独的磁盘,但带宽很大。带宽的多少取决于 RAIDZ vdev 的大小和冗余级别。
A dRAID pool should match the performance of an equivalent RAIDZ pool.
dRAID 存储池的性能应与同等的 RAIDZ 存储池相匹配。
For running VMs, IOPS is the more important metric in most situations.
对于运行虚拟机来说,在大多数情况下,IOPS 是更重要的指标。
Size, Space usage and Redundancy
大小、空间使用和冗余
While a pool made of mirror vdevs will have the best performance
characteristics, the usable space will be 50% of the disks available. Less if a
mirror vdev consists of more than 2 disks, for example in a 3-way mirror. At
least one healthy disk per mirror is needed for the pool to stay functional.
虽然由镜像 vdev 组成的存储池具有最佳的性能特性,但可用空间将是磁盘总容量的 50%。如果镜像 vdev 由超过 2 块磁盘组成,例如 3 路镜像,则可用空间会更少。每个镜像至少需要一块健康的磁盘,存储池才能保持正常运行。
The usable space of a RAIDZ type vdev of N disks is roughly N-P, with P being
the RAIDZ-level. The RAIDZ-level indicates how many arbitrary disks can fail
without losing data. A special case is a 4 disk pool with RAIDZ2. In this
situation it is usually better to use 2 mirror vdevs for the better performance
as the usable space will be the same.
N 块磁盘组成的 RAIDZ 类型 vdev 的可用空间大致为 N-P,其中 P 是 RAIDZ 级别。RAIDZ 级别表示在不丢失数据的情况下,允许任意多少块磁盘故障。一个特殊情况是 4 块磁盘的 RAIDZ2 存储池。在这种情况下,通常使用 2 个镜像 vdev 以获得更好的性能,因为可用空间是相同的。
Another important factor when using any RAIDZ level is how ZVOL datasets, which
are used for VM disks, behave. For each data block the pool needs parity data
which is at least the size of the minimum block size defined by the ashift
value of the pool. With an ashift of 12 the block size of the pool is 4k. The
default block size for a ZVOL is 8k. Therefore, in a RAIDZ2 each 8k block
written will cause two additional 4k parity blocks to be written,
8k + 4k + 4k = 16k. This is of course a simplified approach and the real
situation will be slightly different with metadata, compression and such not
being accounted for in this example.
使用任何 RAIDZ 级别时,另一个重要因素是 ZVOL 数据集(用于虚拟机磁盘)的行为。对于每个数据块,存储池需要的校验数据大小至少与存储池的 ashift 值定义的最小块大小相同。ashift 为 12 时,存储池的块大小为 4k。ZVOL 的默认块大小为 8k。因此,在 RAIDZ2 中,每写入一个 8k 块,将导致额外写入两个 4k 的校验块,8k + 4k + 4k = 16k。当然,这是一种简化的计算方法,实际情况会因元数据、压缩等因素而略有不同,这些因素在此示例中未被考虑。
This behavior can be observed when checking the following properties of the
ZVOL:
可以通过检查 ZVOL 的以下属性来观察这种行为:
-
volsize
-
refreservation (if the pool is not thin provisioned)
refreservation(如果存储池不是精简配置) -
used (if the pool is thin provisioned and without snapshots present)
已使用(如果存储池是精简配置且没有快照存在)
# zfs get volsize,refreservation,used <pool>/vm-<vmid>-disk-X
volsize is the size of the disk as it is presented to the VM, while
refreservation shows the reserved space on the pool which includes the
expected space needed for the parity data. If the pool is thin provisioned, the
refreservation will be set to 0. Another way to observe the behavior is to
compare the used disk space within the VM and the used property. Be aware
that snapshots will skew the value.
volsize 是呈现给虚拟机的磁盘大小,而 refreservation 显示存储池上保留的空间,包括用于奇偶校验数据的预期空间。如果存储池是精简配置,refreservation 将被设置为 0。另一种观察行为的方法是比较虚拟机内使用的磁盘空间和 used 属性。请注意,快照会影响该值。
There are a few options to counter the increased use of space:
有几种选项可以应对空间使用量的增加:
-
Increase the volblocksize to improve the data to parity ratio
增大 volblocksize 以改善数据与奇偶校验的比例 -
Use mirror vdevs instead of RAIDZ
使用镜像 vdev 代替 RAIDZ -
Use ashift=9 (block size of 512 bytes)
使用 ashift=9(块大小为 512 字节)
The volblocksize property can only be set when creating a ZVOL. The default
value can be changed in the storage configuration. When doing this, the guest
needs to be tuned accordingly and depending on the use case, the problem of
write amplification is just moved from the ZFS layer up to the guest.
volblocksize 属性只能在创建 ZVOL 时设置。默认值可以在存储配置中更改。进行此操作时,客户机需要相应调整,并且根据使用场景,写放大问题只是从 ZFS 层转移到了客户机层。
Using ashift=9 when creating the pool can lead to bad
performance, depending on the disks underneath, and cannot be changed later on.
在创建存储池时使用 ashift=9 可能会导致性能不佳,这取决于底层磁盘,并且之后无法更改。
Mirror vdevs (RAID1, RAID10) have favorable behavior for VM workloads. Use
them, unless your environment has specific needs and characteristics where
RAIDZ performance characteristics are acceptable.
镜像 vdev(RAID1,RAID10)对虚拟机工作负载表现良好。除非您的环境有特定需求和特性,且 RAIDZ 的性能特性可以接受,否则建议使用镜像 vdev。
3.9.4. ZFS dRAID
In a ZFS dRAID (declustered RAID) the hot spare drive(s) participate in the RAID.
Their spare capacity is reserved and used for rebuilding when one drive fails.
This provides, depending on the configuration, faster rebuilding compared to a
RAIDZ in case of drive failure. More information can be found in the official
OpenZFS documentation. [3]
在 ZFS dRAID(去簇 RAID)中,热备盘参与 RAID。它们的备用容量被保留,并在某个磁盘故障时用于重建。根据配置不同,这相比 RAIDZ 在磁盘故障时提供更快的重建速度。更多信息可参见官方 OpenZFS 文档。[3]
|
|
dRAID is intended for setups with more than 10-15 disks. A RAIDZ
setup should be better for a lower amount of disks in most use cases. dRAID 适用于超过 10-15 块磁盘的场景。在大多数使用情况下,磁盘数量较少时,RAIDZ 配置通常更合适。 |
|
|
The GUI requires one more disk than the minimum (i.e. dRAID1 needs 3). It
expects that a spare disk is added as well. 图形界面比最低要求多一个磁盘(即 dRAID1 需要 3 个)。它还期望添加一个备用磁盘。 |
-
dRAID1 or dRAID: requires at least 2 disks, one can fail before data is lost
dRAID1 或 dRAID:至少需要 2 个磁盘,允许一个磁盘故障而不丢失数据 -
dRAID2: requires at least 3 disks, two can fail before data is lost
dRAID2:至少需要 3 个磁盘,允许两个磁盘故障而不丢失数据 -
dRAID3: requires at least 4 disks, three can fail before data is lost
dRAID3:至少需要 4 个磁盘,允许三个磁盘故障而不丢失数据
Additional information can be found on the manual page:
更多信息可以在手册页中找到:
# man zpoolconcepts
Spares and Data 备用盘和数据
The number of spares tells the system how many disks it should keep ready in
case of a disk failure. The default value is 0 spares. Without spares,
rebuilding won’t get any speed benefits.
备用盘数量告诉系统在磁盘故障时应准备多少块磁盘。默认值是 0 个备用盘。没有备用盘,重建过程不会获得任何速度上的提升。
data defines the number of devices in a redundancy group. The default value is
8. If disks - parity - spares results in something less than 8, that lower
number is used instead. In general, a smaller number of data devices leads to higher
IOPS, better compression ratios and faster resilvering, but defining fewer data
devices reduces the available storage capacity of the pool.
data 定义了冗余组中的设备数量。默认值是 8。除非磁盘数 - 奇偶校验数 - 备用盘数小于 8,否则使用较小的数值。一般来说,较少的数据设备数量会带来更高的 IOPS、更好的压缩比和更快的重建速度,但定义较少的数据设备会减少存储池的可用容量。
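As a rough sketch only (pool name, device names and geometry are placeholders; see zpoolconcepts(7) for the exact syntax), a dRAID2 vdev with 4 data devices and 1 distributed spare over 8 disks could be created like this:
# zpool create -f -o ashift=12 <pool> draid2:4d:8c:1s <device1> <device2> ... <device8>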
3.9.5. Bootloader 3.9.5. 引导加载程序
Proxmox VE uses proxmox-boot-tool to manage the
bootloader configuration.
See the chapter on Proxmox VE host bootloaders for details.
Proxmox VE 使用 proxmox-boot-tool 来管理引导加载程序的配置。详情请参见 Proxmox VE 主机引导加载程序章节。
3.9.6. ZFS Administration
3.9.6. ZFS 管理
This section gives you some usage examples for common tasks. ZFS
itself is really powerful and provides many options. The main commands
to manage ZFS are zfs and zpool. Both commands come with great
manual pages, which can be read with:
本节为您提供一些常见任务的使用示例。ZFS 本身功能强大,提供了许多选项。管理 ZFS 的主要命令是 zfs 和 zpool。两者都附带了详尽的手册页,可以通过以下命令查看:
# man zpool
# man zfs
Create a new zpool
创建一个新的 zpool
To create a new pool, at least one disk is needed. The ashift should
correspond to a sector size (2 to the power of ashift) equal to or larger than that of the underlying disk.
要创建一个新的存储池,至少需要一块磁盘。ashift 应该与底层磁盘的扇区大小(2 的 ashift 次方)相同或更大。
# zpool create -f -o ashift=12 <pool> <device>
|
|
Pool names must adhere to the following rules:
|
To activate compression (see section Compression in ZFS):
要启用压缩(参见 ZFS 中的压缩章节):
# zfs set compression=lz4 <pool>
Create a new pool with RAID-0
创建一个新的 RAID-0 池
Minimum 1 disk 至少 1 个磁盘
# zpool create -f -o ashift=12 <pool> <device1> <device2>
Create a new pool with RAID-1
创建一个新的 RAID-1 池
Minimum 2 disks 至少 2 个磁盘
# zpool create -f -o ashift=12 <pool> mirror <device1> <device2>
Create a new pool with RAID-10
创建一个新的 RAID-10 存储池
Minimum 4 disks 至少 4 块磁盘
# zpool create -f -o ashift=12 <pool> mirror <device1> <device2> mirror <device3> <device4>
Create a new pool with RAIDZ-1
创建一个新的 RAIDZ-1 存储池
Minimum 3 disks 至少 3 块磁盘
# zpool create -f -o ashift=12 <pool> raidz1 <device1> <device2> <device3>
Create a new pool with RAIDZ-2
创建一个带有 RAIDZ-2 的新存储池
Minimum 4 disks 至少 4 个磁盘
# zpool create -f -o ashift=12 <pool> raidz2 <device1> <device2> <device3> <device4>
Please read the section for
ZFS RAID Level Considerations
to get a rough estimate of the IOPS and bandwidth to expect before setting up
a pool, especially when wanting to use a RAID-Z mode.
请阅读 ZFS RAID 级别注意事项部分,以便在设置存储池之前对 IOPS 和带宽预期有一个大致的估计,尤其是在想要使用 RAID-Z 模式时。
Create a new pool with cache (L2ARC)
创建一个带有缓存(L2ARC) 的新存储池
It is possible to use a dedicated device, or partition, as second-level cache to
increase the performance. Such a cache device will especially help with
random-read workloads of data that is mostly static. As it acts as additional
caching layer between the actual storage, and the in-memory ARC, it can also
help if the ARC must be reduced due to memory constraints.
可以使用专用设备或分区作为二级缓存以提高性能。这样的缓存设备特别有助于处理大部分静态数据的随机读取工作负载。由于它作为实际存储和内存中的 ARC 之间的额外缓存层,如果由于内存限制必须减少 ARC 大小,它也能提供帮助。
Create ZFS pool with a disk cache 使用磁盘缓存创建 ZFS 池
# zpool create -f -o ashift=12 <pool> <device> cache <cache-device>
Here only a single <device> and a single <cache-device> were used, but it is
possible to use more devices, as shown in
Create a new pool with RAID.
这里只使用了单个<device>和单个<cache-device>,但也可以使用更多设备,如“创建带 RAID 的新池”中所示。
Note that for cache devices no mirror or RAID modes exist; they are all simply
accumulated.
请注意,缓存设备没有镜像或 RAID 模式,它们只是简单地累积。
If any cache device produces errors on read, ZFS will transparently divert that
request to the underlying storage layer.
如果任何缓存设备在读取时产生错误,ZFS 会透明地将该请求转向底层存储层。
Create a new pool with log (ZIL)
创建一个带有日志(ZIL)的新存储池
It is possible to use a dedicated drive, or partition, for the ZFS Intent Log
(ZIL). It is mainly used to provide safe synchronous transactions, and is
therefore often found in performance-critical paths like databases, or other
programs that issue fsync operations frequently.
可以使用专用驱动器或分区作为 ZFS 意图日志(ZIL),它主要用于提供安全的同步事务,因此通常用于性能关键路径,如数据库或其他更频繁执行 fsync 操作的程序。
The pool is used as the default ZIL location. Diverting the ZIL IO load to a
separate device can help to reduce transaction latencies while relieving the
main pool at the same time, increasing overall performance.
存储池被用作默认的 ZIL 位置,将 ZIL IO 负载转移到单独的设备可以帮助减少事务延迟,同时减轻主存储池的负担,提高整体性能。
For disks to be used as log devices, directly or through a partition, it’s
recommended to:
对于用作日志设备的磁盘,无论是直接使用还是通过分区,建议:
-
use fast SSDs with power-loss protection, as those have much smaller commit latencies.
使用带有断电保护的高速 SSD,因为它们具有更小的提交延迟。 -
Use at least a few GB for the partition (or whole device), but using more than half of your installed memory won’t provide you with any real advantage.
为分区(或整个设备)至少使用几 GB,但使用超过已安装内存一半的空间不会带来实际优势。
Create ZFS pool with a separate log device 创建带有独立日志设备的 ZFS 池。
# zpool create -f -o ashift=12 <pool> <device> log <log-device>
In the example above, a single <device> and a single <log-device> is used,
but you can also combine this with other RAID variants, as described in the
Create a new pool with RAID section.
在上面的示例中,使用了单个<device>和单个<log-device>,但您也可以将其与其他 RAID 变体结合使用,如“使用 RAID 创建新池”部分所述。
You can also mirror the log device to multiple devices; this is mainly useful to
ensure that performance does not immediately degrade if a single log device
fails.
您还可以将日志设备镜像到多个设备,这主要用于确保如果单个日志设备发生故障,性能不会立即下降。
If all log devices fail the ZFS main pool itself will be used again, until the
log device(s) get replaced.
如果所有日志设备都发生故障,ZFS 主池本身将再次被使用,直到日志设备被更换。
Add cache and log to an existing pool
向现有池添加缓存和日志
If you have a pool without cache and log you can still add both, or just one of
them, at any time.
如果您的存储池没有缓存和日志设备,您仍然可以随时添加两者,或者只添加其中之一。
For example, let’s assume you got a good enterprise SSD with power-loss
protection that you want to use for improving the overall performance of your
pool.
例如,假设您有一块带有断电保护的优质企业级 SSD,您想用它来提升存储池的整体性能。
As the maximum size of a log device should be about half the size of the
installed physical memory, the ZIL will most likely only take up
a relatively small part of the SSD; the remaining space can be used as cache.
由于日志设备的最大大小应约为已安装物理内存的一半,这意味着 ZIL 很可能只占用 SSD 的相对较小部分,剩余空间可以用作缓存。
First you have to create two GPT partitions on the SSD with parted or gdisk.
首先,您需要使用 parted 或 gdisk 在 SSD 上创建两个 GPT 分区。
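A minimal sketch of such a layout, assuming a machine with 64 GiB of RAM (the partition sizes and the device path are placeholders):
# sgdisk -n1:0:+32G /dev/disk/by-id/<ssd>   # first partition, to be used as ZIL
# sgdisk -n2:0:0 /dev/disk/by-id/<ssd>      # remaining space, to be used as cache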
Then you’re ready to add them to a pool:
然后你就可以将它们添加到一个存储池中:
Add both, a separate log device and a second-level cache, to an existing pool 将一个独立的日志设备和二级缓存都添加到现有的存储池中
# zpool add -f <pool> log <device-part1> cache <device-part2>
Just replace <pool>, <device-part1> and <device-part2> with the pool name
and the two /dev/disk/by-id/ paths to the partitions.
只需将 <pool>、<device-part1> 和 <device-part2> 替换为存储池名称以及两个 /dev/disk/by-id/ 路径对应的分区。
You can also add ZIL and cache separately.
你也可以分别添加 ZIL 和缓存。
Add a log device to an existing ZFS pool 向现有的 ZFS 池添加日志设备
# zpool add <pool> log <log-device>
Changing a failed device
更换故障设备
# zpool replace -f <pool> <old-device> <new-device>
Changing a failed bootable device 更换故障的可启动设备
Depending on how Proxmox VE was installed it is either using systemd-boot or GRUB
through proxmox-boot-tool [4] or plain GRUB as bootloader (see
Host Bootloader). You can check by running:
根据 Proxmox VE 的安装方式,它可能使用 systemd-boot 或通过 proxmox-boot-tool [4] 使用 GRUB,或者使用普通的 GRUB 作为引导加载程序(参见主机引导加载程序)。你可以通过运行以下命令进行检查:
# proxmox-boot-tool status
The first steps of copying the partition table, reissuing GUIDs and replacing
the ZFS partition are the same. To make the system bootable from the new disk,
different steps are needed which depend on the bootloader in use.
复制分区表、重新分配 GUID 以及替换 ZFS 分区的第一步是相同的。为了使系统能够从新磁盘启动,需要根据所使用的引导加载程序采取不同的步骤。
# sgdisk <healthy bootable device> -R <new device>
# sgdisk -G <new device>
# zpool replace -f <pool> <old zfs partition> <new zfs partition>
|
|
Use the zpool status -v command to monitor how far the resilvering
process of the new disk has progressed. 使用 zpool status -v 命令监控新磁盘的重同步过程进展情况。 |
# proxmox-boot-tool format <new disk's ESP>
# proxmox-boot-tool init <new disk's ESP> [grub]
|
|
ESP stands for EFI System Partition, which is set up as partition #2 on
bootable disks when using the Proxmox VE installer since version 5.4. For details,
see
Setting up a new partition for use as synced ESP. ESP 代表 EFI 系统分区,自 Proxmox VE 安装程序 5.4 版本起,在可启动磁盘上作为第 2 分区进行设置。详情请参见设置新的分区以用作同步 ESP。 |
|
|
Make sure to pass grub as mode to proxmox-boot-tool init if
proxmox-boot-tool status indicates your current disks are using GRUB,
especially if Secure Boot is enabled! 如果 proxmox-boot-tool status 显示您当前的磁盘正在使用 GRUB,尤其是在启用了安全启动的情况下,请确保在执行 proxmox-boot-tool init 时将模式设置为 grub! |
# grub-install <new disk>
|
|
Plain GRUB is only used on systems installed with Proxmox VE 6.3 or earlier,
which have not been manually migrated to use proxmox-boot-tool yet. 纯 GRUB 仅用于安装了 Proxmox VE 6.3 或更早版本且尚未手动迁移到使用 proxmox-boot-tool 的系统。 |
3.9.7. Configure E-Mail Notification
3.9.7. 配置电子邮件通知
ZFS comes with an event daemon ZED, which monitors events generated by the ZFS
kernel module. The daemon can also send emails on ZFS events like pool errors.
Newer ZFS packages ship the daemon in a separate zfs-zed package, which should
already be installed by default in Proxmox VE.
ZFS 附带一个事件守护进程 ZED,用于监控由 ZFS 内核模块生成的事件。该守护进程还可以在发生 ZFS 事件(如存储池错误)时发送电子邮件。较新的 ZFS 软件包将该守护进程作为单独的 zfs-zed 软件包提供,Proxmox VE 默认应已安装该软件包。
You can configure the daemon via the file /etc/zfs/zed.d/zed.rc with your
favorite editor. The required setting for email notification is
ZED_EMAIL_ADDR, which is set to root by default.
您可以使用喜欢的编辑器通过文件 /etc/zfs/zed.d/zed.rc 配置该守护进程。电子邮件通知所需的设置是 ZED_EMAIL_ADDR,默认设置为 root。
ZED_EMAIL_ADDR="root"
Please note Proxmox VE forwards mails to root to the email address
configured for the root user.
请注意,Proxmox VE 会将发送给 root 的邮件转发到为 root 用户配置的电子邮件地址。
3.9.8. Limit ZFS Memory Usage
3.9.8. 限制 ZFS 内存使用
ZFS uses 50 % of the host memory for the Adaptive Replacement
Cache (ARC) by default. For new installations starting with Proxmox VE 8.1, the
ARC usage limit will be set to 10 % of the installed physical memory, clamped
to a maximum of 16 GiB. This value is written to /etc/modprobe.d/zfs.conf.
ZFS 默认使用主机内存的 50% 作为自适应替换缓存(ARC)。对于从 Proxmox VE 8.1 开始的新安装,ARC 使用限制将设置为已安装物理内存的 10%,并限制最大值为 16 GiB。该值写入 /etc/modprobe.d/zfs.conf。
Allocating enough memory for the ARC is crucial for IO performance, so reduce it
with caution. As a general rule of thumb, allocate at least 2 GiB Base + 1
GiB/TiB-Storage. For example, if you have a pool with 8 TiB of available
storage space then you should use 10 GiB of memory for the ARC.
为 ARC 分配足够的内存对 IO 性能至关重要,因此请谨慎减少。一般经验法则是,至少分配 2 GiB 基础内存 + 每 TiB 存储 1 GiB。例如,如果您有一个可用存储空间为 8 TiB 的存储池,则应为 ARC 使用 10 GiB 的内存。
ZFS also enforces a minimum value of 64 MiB.
ZFS 还强制执行最小值为 64 MiB。
You can change the ARC usage limit for the current boot (a reboot resets this
change again) by writing to the zfs_arc_max module parameter directly:
您可以通过直接写入 zfs_arc_max 模块参数来更改当前启动的 ARC 使用限制(重启后此更改将重置):
echo "$[10 * 1024*1024*1024]" >/sys/module/zfs/parameters/zfs_arc_max
To permanently change the ARC limits, add (or change if already present) the
following line to /etc/modprobe.d/zfs.conf:
要永久更改 ARC 限制,请将以下行添加到 /etc/modprobe.d/zfs.conf 文件中(如果已存在则修改):
options zfs zfs_arc_max=8589934592
This example setting limits the usage to 8 GiB (8 * 2^30).
此示例设置将使用限制为 8 GiB(8 * 2^30)。
|
|
In case your desired zfs_arc_max value is lower than or equal to
zfs_arc_min (which defaults to 1/32 of the system memory), zfs_arc_max will
be ignored unless you also set zfs_arc_min to at most zfs_arc_max - 1. 如果您希望的 zfs_arc_max 值小于或等于 zfs_arc_min(默认是系统内存的 1/32),则 zfs_arc_max 将被忽略,除非您也将 zfs_arc_min 设置为不大于 zfs_arc_max - 1。 |
echo "$[8 * 1024*1024*1024 - 1]" >/sys/module/zfs/parameters/zfs_arc_min echo "$[8 * 1024*1024*1024]" >/sys/module/zfs/parameters/zfs_arc_max
This example setting (temporarily) limits the usage to 8 GiB (8 * 2^30) on
systems with more than 256 GiB of total memory, where simply setting
zfs_arc_max alone would not work.
此示例设置在总内存超过 256 GiB 的系统上(临时)将使用限制为 8 GiB(8 * 2^30),仅设置 zfs_arc_max 本身将无法生效。
|
|
If your root file system is ZFS, you must update your initramfs every
time this value changes: # update-initramfs -u -k all You must reboot to activate these changes. |
3.9.9. SWAP on ZFS
3.9.9. ZFS 上的交换空间
Swap space created on a zvol may cause some problems, like blocking the
server or generating a high IO load, often seen when starting a backup
to an external storage.
在 zvol 上创建的交换空间可能会引发一些问题,比如阻塞服务器或产生高 IO 负载,这种情况常见于启动备份到外部存储时。
We strongly recommend using enough memory, so that you normally do not
run into low memory situations. Should you need or want to add swap, it is
preferred to create a partition on a physical disk and use it as a swap device.
You can leave some space free for this purpose in the advanced options of the
installer. Additionally, you can lower the
“swappiness” value. A good value for servers is 10:
我们强烈建议使用足够的内存,这样通常不会遇到内存不足的情况。如果您需要或想要添加交换空间,建议在物理磁盘上创建一个分区并将其用作交换设备。您可以在安装程序的高级选项中为此预留一些空间。此外,您还可以降低“swappiness”值。对于服务器来说,10 是一个不错的值:
# sysctl -w vm.swappiness=10
To make the swappiness persistent, open /etc/sysctl.conf with
an editor of your choice and add the following line:
要使 swappiness 设置持久生效,请使用您选择的编辑器打开 /etc/sysctl.conf 文件,并添加以下行:
vm.swappiness = 10
| Value 值 | Strategy 策略 |
|---|---|
| vm.swappiness = 0 | The kernel will swap only to avoid an out of memory condition |
| vm.swappiness = 1 | Minimum amount of swapping without disabling it entirely. |
| vm.swappiness = 10 | This value is sometimes recommended to improve performance when sufficient memory exists in a system. |
| vm.swappiness = 60 | The default value. 默认值。 |
| vm.swappiness = 100 | The kernel will swap aggressively. |
3.9.10. Encrypted ZFS Datasets
3.9.10. 加密的 ZFS 数据集
|
|
Native ZFS encryption in Proxmox VE is experimental. Known limitations and
issues include Replication with encrypted datasets
[5],
as well as checksum errors when using Snapshots or ZVOLs.
[6] Proxmox VE 中的原生 ZFS 加密处于实验阶段。已知的限制和问题包括使用加密数据集进行复制[5],以及使用快照或 ZVOL 时出现校验和错误[6]。 |
ZFS on Linux version 0.8.0 introduced support for native encryption of
datasets. After an upgrade from previous ZFS on Linux versions, the encryption
feature can be enabled per pool:
ZFS on Linux 版本 0.8.0 引入了对数据集原生加密的支持。升级自之前版本的 ZFS on Linux 后,可以针对每个存储池启用加密功能:
# zpool get feature@encryption tank
NAME  PROPERTY            VALUE     SOURCE
tank  feature@encryption  disabled  local
# zpool set feature@encryption=enabled tank
# zpool get feature@encryption tank
NAME  PROPERTY            VALUE     SOURCE
tank  feature@encryption  enabled   local
|
|
There is currently no support for booting from pools with encrypted
datasets using GRUB, and only limited support for automatically unlocking
encrypted datasets on boot. Older versions of ZFS without encryption support
will not be able to decrypt stored data. 目前不支持使用 GRUB 从带有加密数据集的存储池启动,并且仅有限支持在启动时自动解锁加密数据集。旧版本不支持加密的 ZFS 无法解密存储的数据。 |
|
|
It is recommended to either unlock storage datasets manually after
booting, or to write a custom unit to pass the key material needed for
unlocking on boot to zfs load-key. 建议在启动后手动解锁存储数据集,或编写自定义单元,在启动时将解锁所需的密钥材料传递给 zfs load-key。 |
|
|
Establish and test a backup procedure before enabling encryption of
production data. If the associated key material/passphrase/keyfile has been
lost, accessing the encrypted data is no longer possible. 在启用生产数据加密之前,先建立并测试备份程序。如果相关的密钥材料/密码/密钥文件丢失,将无法访问加密数据。 |
Encryption needs to be setup when creating datasets/zvols, and is inherited by
default to child datasets. For example, to create an encrypted dataset
tank/encrypted_data and configure it as storage in Proxmox VE, run the following
commands:
创建数据集/zvol 时需要设置加密,且默认会继承给子数据集。例如,要创建一个加密的数据集 tank/encrypted_data 并将其配置为 Proxmox VE 中的存储,运行以下命令:
# zfs create -o encryption=on -o keyformat=passphrase tank/encrypted_data
Enter passphrase:
Re-enter passphrase:
# pvesm add zfspool encrypted_zfs -pool tank/encrypted_data
All guest volumes/disks created on this storage will be encrypted with the
shared key material of the parent dataset.
在此存储上创建的所有客户机卷/磁盘都将使用父数据集的共享密钥材料进行加密。
To actually use the storage, the associated key material needs to be loaded
and the dataset needs to be mounted. This can be done in one step with:
要实际使用该存储,需要加载相关的密钥材料并挂载数据集。这可以通过以下一步完成:
# zfs mount -l tank/encrypted_data
Enter passphrase for 'tank/encrypted_data':
It is also possible to use a (random) keyfile instead of prompting for a
passphrase by setting the keylocation and keyformat properties, either at
creation time or with zfs change-key on existing datasets:
也可以通过设置 keylocation 和 keyformat 属性来使用(随机)密钥文件,而不是提示输入密码短语,这可以在创建时设置,也可以通过 zfs change-key 命令对现有数据集进行设置:
# dd if=/dev/urandom of=/path/to/keyfile bs=32 count=1
# zfs change-key -o keyformat=raw -o keylocation=file:///path/to/keyfile tank/encrypted_data
|
|
When using a keyfile, special care needs to be taken to secure the
keyfile against unauthorized access or accidental loss. Without the keyfile, it
is not possible to access the plaintext data! 使用密钥文件时,需要特别注意保护密钥文件,防止未经授权的访问或意外丢失。没有密钥文件,就无法访问明文数据! |
A guest volume created underneath an encrypted dataset will have its
encryptionroot property set accordingly. The key material only needs to be
loaded once per encryptionroot to be available to all encrypted datasets
underneath it.
在加密数据集下创建的客户卷,其 encryptionroot 属性会相应设置。密钥材料只需针对每个 encryptionroot 加载一次,即可供其下所有加密数据集使用。
See the encryptionroot, encryption, keylocation, keyformat and
keystatus properties, the zfs load-key, zfs unload-key and zfs
change-key commands and the Encryption section from man zfs for more
details and advanced usage.
有关更多详细信息和高级用法,请参阅 encryptionroot、encryption、keylocation、keyformat 和 keystatus 属性,zfs load-key、zfs unload-key 和 zfs change-key 命令,以及 man zfs 中的 Encryption 部分。
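For example, to check whether the key for the dataset created above is currently loaded, you can query the relevant properties:
# zfs get keystatus,encryptionroot tank/encrypted_data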
3.9.11. Compression in ZFS
3.9.11. ZFS 中的压缩
When compression is enabled on a dataset, ZFS tries to compress all new
blocks before writing them and decompresses them on reading. Already
existing data will not be compressed retroactively.
当在数据集上启用压缩时,ZFS 会尝试在写入之前压缩所有新块,并在读取时解压它们。已有的数据不会被回溯压缩。
You can enable compression with:
您可以通过以下方式启用压缩:
# zfs set compression=<algorithm> <dataset>
We recommend using the lz4 algorithm, because it adds very little CPU
overhead. Other algorithms like lzjb and gzip-N, where N is an
integer from 1 (fastest) to 9 (best compression ratio), are also
available. Depending on the algorithm and how compressible the data is,
having compression enabled can even increase I/O performance.
我们推荐使用 lz4 算法,因为它几乎不会增加 CPU 开销。其他算法如 lzjb 和 gzip-N(其中 N 是从 1(最快)到 9(最佳压缩比)的整数)也可用。根据算法和数据的可压缩性,启用压缩甚至可能提高 I/O 性能。
You can disable compression at any time with:
您可以随时通过以下方式禁用压缩:
# zfs set compression=off <dataset>
Again, only new blocks will be affected by this change.
同样,只有新的数据块会受到此更改的影响。
3.9.12. ZFS Special Device
3.9.12. ZFS 特殊设备
Since version 0.8.0 ZFS supports special devices. A special device in a
pool is used to store metadata, deduplication tables, and optionally small
file blocks.
自版本 0.8.0 起,ZFS 支持特殊设备。池中的特殊设备用于存储元数据、重复数据删除表,以及可选的小文件块。
A special device can improve the speed of a pool consisting of slow spinning
hard disks with a lot of metadata changes. For example workloads that involve
creating, updating or deleting a large number of files will benefit from the
presence of a special device. ZFS datasets can also be configured to store
whole small files on the special device which can further improve the
performance. Use fast SSDs for the special device.
特殊设备可以提升由转速较慢的硬盘组成且有大量元数据变更的存储池的速度。例如,涉及创建、更新或删除大量文件的工作负载将从特殊设备中受益。ZFS 数据集也可以配置为将整个小文件存储在特殊设备上,这可以进一步提升性能。特殊设备应使用高速 SSD。
|
|
The redundancy of the special device should match the one of the
pool, since the special device is a point of failure for the whole pool. 特殊设备的冗余应与存储池的冗余相匹配,因为特殊设备是整个存储池的单点故障。 |
|
|
Adding a special device to a pool cannot be undone! 向存储池添加特殊设备是不可撤销的! |
Create a pool with special device and RAID-1: 创建带有特殊设备和 RAID-1 的存储池:
# zpool create -f -o ashift=12 <pool> mirror <device1> <device2> special mirror <device3> <device4>
Add a special device to an existing pool with RAID-1: 向现有的池中添加一个特殊设备,使用 RAID-1:
# zpool add <pool> special mirror <device1> <device2>
ZFS datasets expose the special_small_blocks=<size> property. size can be
0 to disable storing small file blocks on the special device, or a power of
two in the range from 512B to 1M. After setting the property, new file
blocks smaller than size will be allocated on the special device.
ZFS 数据集暴露了 special_small_blocks=<size> 属性。size 可以设置为 0 以禁用在特殊设备上存储小文件块,或者设置为 512B 到 1M 范围内的 2 的幂。设置该属性后,新的小于 size 的文件块将分配到特殊设备上。
|
|
If the value for special_small_blocks is greater than or equal to
the recordsize (default 128K) of the dataset, all data will be written to
the special device, so be careful! 如果 special_small_blocks 的值大于或等于数据集的 recordsize(默认 128K),所有数据都会写入特殊设备,因此请小心! |
Setting the special_small_blocks property on a pool will change the default
value of that property for all child ZFS datasets (for example all containers
in the pool will opt in for small file blocks).
在池上设置 special_small_blocks 属性将更改该属性在所有子 ZFS 数据集中的默认值(例如,池中的所有容器都将选择使用小文件块)。
Opt in for all files smaller than 4K-blocks on the whole pool: 选择对整个池中小于 4K 块的所有文件启用:
# zfs set special_small_blocks=4K <pool>
Opt in for a single dataset: 为单个数据集选择启用小文件块:
# zfs set special_small_blocks=4K <pool>/<filesystem>
Opt out from small file blocks for a single dataset: 为单个数据集选择禁用小文件块:
# zfs set special_small_blocks=0 <pool>/<filesystem>
3.9.13. ZFS Pool Features
3.9.13. ZFS 池功能
Changes to the on-disk format in ZFS are only made between major version changes
and are specified through features. All features, as well as the general
mechanism are well documented in the zpool-features(5) manpage.
ZFS 中对磁盘格式的更改仅在主版本更改之间进行,并通过特性来指定。所有特性以及通用机制都在 zpool-features(5)手册页中有详细说明。
Since enabling new features can render a pool not importable by an older version
of ZFS, this needs to be done actively by the administrator, by running
zpool upgrade on the pool (see the zpool-upgrade(8) manpage).
由于启用新特性可能导致旧版本的 ZFS 无法导入该存储池,因此需要管理员主动通过在存储池上运行 zpool upgrade 来完成此操作(参见 zpool-upgrade(8)手册页)。
Unless you need to use one of the new features, there is no upside to enabling
them.
除非需要使用某个新特性,否则启用它们没有任何好处。
In fact, there are some downsides to enabling new features:
事实上,启用新特性还有一些缺点:
-
A system with root on ZFS, that still boots using GRUB will become unbootable if a new feature is active on the rpool, due to the incompatible implementation of ZFS in GRUB.
根文件系统位于 ZFS 上且仍使用 GRUB 启动的系统,如果在 rpool 上启用了新功能,将因 GRUB 中 ZFS 的不兼容实现而变得无法启动。 -
The system will not be able to import any upgraded pool when booted with an older kernel, which still ships with the old ZFS modules.
系统在使用仍带有旧 ZFS 模块的旧内核启动时,将无法导入任何升级后的存储池。 -
Booting an older Proxmox VE ISO to repair a non-booting system will likewise not work.
使用较旧的 Proxmox VE ISO 启动以修复无法启动的系统同样无效。
|
|
Do not upgrade your rpool if your system is still booted with
GRUB, as this will render your system unbootable. This includes systems
installed before Proxmox VE 5.4, and systems booting with legacy BIOS boot (see
how to determine the bootloader). 如果系统仍通过 GRUB 启动,请勿升级您的 rpool,因为这将导致系统无法启动。这包括在 Proxmox VE 5.4 之前安装的系统,以及使用传统 BIOS 启动的系统(请参阅如何确定启动加载程序)。 |
Enable new features for a ZFS pool: 为 ZFS 池启用新功能:
# zpool upgrade <pool>
3.10. BTRFS
|
|
BTRFS integration is currently a technology preview in Proxmox VE. BTRFS 集成目前在 Proxmox VE 中处于技术预览阶段。 |
BTRFS is a modern copy on write file system natively supported by the Linux
kernel, implementing features such as snapshots, built-in RAID and self healing
via checksums for data and metadata. Starting with Proxmox VE 7.0, BTRFS is
introduced as optional selection for the root file system.
BTRFS 是一种现代的写时复制文件系统,Linux 内核原生支持,具备快照、内置 RAID 以及通过校验和对数据和元数据进行自我修复等功能。从 Proxmox VE 7.0 开始,BTRFS 被引入为根文件系统的可选项。
General advantages of BTRFS BTRFS 的一般优势
-
Main system setup almost identical to the traditional ext4 based setup
主系统设置几乎与传统的基于 ext4 的设置相同 -
Snapshots 快照
-
Data compression on file system level
文件系统级别的数据压缩 -
Copy-on-write clone 写时复制克隆
-
RAID0, RAID1 and RAID10
RAID0、RAID1 和 RAID10 -
Protection against data corruption
防止数据损坏 -
Self healing 自我修复
-
Natively supported by the Linux kernel
Linux 内核原生支持
-
RAID levels 5/6 are experimental and dangerous, see BTRFS Status
RAID 5/6 级别是实验性的且存在风险,详见 BTRFS 状态
3.10.1. Installation as Root File System
3.10.1. 作为根文件系统的安装
When you install using the Proxmox VE installer, you can choose BTRFS for the root
file system. You need to select the RAID type at installation time:
当您使用 Proxmox VE 安装程序安装时,可以选择 BTRFS 作为根文件系统。您需要在安装时选择 RAID 类型:
| RAID Level | Description |
|---|---|
| RAID0 | Also called "striping". The capacity of such a volume is the sum of the capacities of all disks. But RAID0 does not add any redundancy, so the failure of a single drive makes the volume unusable. |
| RAID1 | Also called "mirroring". Data is written identically to all disks. This mode requires at least 2 disks with the same size. The resulting capacity is that of a single disk. |
| RAID10 | A combination of RAID0 and RAID1. Requires at least 4 disks. |
The installer automatically partitions the disks and creates an additional
subvolume at /var/lib/pve/local-btrfs. In order to use that with the Proxmox VE
tools, the installer creates the following configuration entry in
/etc/pve/storage.cfg:
安装程序会自动对磁盘进行分区,并在 /var/lib/pve/local-btrfs 创建一个额外的子卷。为了使 Proxmox VE 工具能够使用该子卷,安装程序在 /etc/pve/storage.cfg 中创建了以下配置条目:
dir: local
path /var/lib/vz
content iso,vztmpl,backup
disable
btrfs: local-btrfs
path /var/lib/pve/local-btrfs
content iso,vztmpl,backup,images,rootdir
This explicitly disables the default local storage in favor of a BTRFS
specific storage entry on the additional subvolume.
这明确禁用了默认的本地存储,转而使用额外子卷上的 BTRFS 专用存储条目。
The btrfs command is used to configure and manage the BTRFS file system.
After the installation, the following command lists all additional subvolumes:
btrfs 命令用于配置和管理 BTRFS 文件系统,安装完成后,以下命令列出所有额外的子卷:
# btrfs subvolume list /
ID 256 gen 6 top level 5 path var/lib/pve/local-btrfs
3.10.2. BTRFS Administration
3.10.2. BTRFS 管理
This section gives you some usage examples for common tasks.
本节为您提供一些常见任务的使用示例。
Creating a BTRFS file system
创建 BTRFS 文件系统
To create BTRFS file systems, mkfs.btrfs is used. The -d and -m parameters
are used to set the profile for data and metadata respectively. With the
optional -L parameter, a label can be set.
创建 BTRFS 文件系统时,使用 mkfs.btrfs。-d 和 -m 参数分别用于设置数据和元数据的配置文件。通过可选的 -L 参数,可以设置标签。
Generally, the following modes are supported: single, raid0, raid1,
raid10.
通常支持以下模式:single、raid0、raid1、raid10。
Create a BTRFS file system on a single disk /dev/sdb with the label
My-Storage:
在单个磁盘 /dev/sdb 上创建一个标签为 My-Storage 的 BTRFS 文件系统:
# mkfs.btrfs -m single -d single -L My-Storage /dev/sdb
Or create a RAID1 on the two partitions /dev/sdb1 and /dev/sdc1:
或者在两个分区 /dev/sdb1 和 /dev/sdc1 上创建一个 RAID1:
# mkfs.btrfs -m raid1 -d raid1 -L My-Storage /dev/sdb1 /dev/sdc1
Mounting a BTRFS file system
挂载 BTRFS 文件系统
The new file-system can then be mounted either manually, for example:
然后可以手动挂载新的文件系统,例如:
# mkdir /my-storage
# mount /dev/sdb /my-storage
A BTRFS can also be added to /etc/fstab like any other mount point,
automatically mounting it on boot. It’s recommended to avoid using
block-device paths but to use the UUID value the mkfs.btrfs command printed,
especially if there is more than one disk in a BTRFS setup.
BTRFS 也可以像其他挂载点一样添加到 /etc/fstab 中,开机时自动挂载。建议避免使用块设备路径,而使用 mkfs.btrfs 命令打印的 UUID 值,尤其是在 BTRFS 配置中有多个磁盘时。
For example: 例如:
# ... other mount points left out for brevity
# using the UUID from the mkfs.btrfs output is highly recommended
UUID=e2c0c3ff-2114-4f54-b767-3a203e49f6f3 /my-storage btrfs defaults 0 0
|
|
If you do not have the UUID available anymore you can use the blkid tool
to list all properties of block-devices. 如果你不再有 UUID,可以使用 blkid 工具列出所有块设备的属性。 |
Afterwards you can trigger the first mount by executing:
之后,您可以通过执行以下命令来触发首次挂载:
mount /my-storage
After the next reboot this will be automatically done by the system at boot.
下一次重启后,系统将在启动时自动完成此操作。
Adding a BTRFS file system to Proxmox VE
向 Proxmox VE 添加 BTRFS 文件系统
You can add an existing BTRFS file system to Proxmox VE via the web interface, or
using the CLI, for example:
您可以通过网页界面或使用命令行界面将现有的 BTRFS 文件系统添加到 Proxmox VE,例如:
pvesm add btrfs my-storage --path /my-storage
Creating a subvolume 创建子卷
Creating a subvolume links it to a path in the BTRFS file system, where it will
appear as a regular directory.
创建子卷会将其链接到 BTRFS 文件系统中的一个路径,该路径将显示为一个普通目录。
# btrfs subvolume create /some/path
Afterwards /some/path will act like a regular directory.
之后 /some/path 将表现得像一个普通目录。
Deleting a subvolume 删除子卷
Contrary to directories removed via rmdir, subvolumes do not need to be empty
in order to be deleted via the btrfs command.
与通过 rmdir 删除的目录不同,子卷不需要为空即可通过 btrfs 命令删除。
# btrfs subvolume delete /some/path
Creating a snapshot of a subvolume
创建子卷的快照
BTRFS does not actually distinguish between snapshots and normal subvolumes, so
taking a snapshot can also be seen as creating an arbitrary copy of a subvolume.
By convention, Proxmox VE will use the read-only flag when creating snapshots of
guest disks or subvolumes, but this flag can also be changed later on.
BTRFS 实际上并不区分快照和普通子卷,因此创建快照也可以看作是创建子卷的任意副本。按照惯例,Proxmox VE 在创建来宾磁盘或子卷的快照时会使用只读标志,但该标志以后也可以更改。
# btrfs subvolume snapshot -r /some/path /a/new/path
This will create a read-only "clone" of the subvolume on /some/path at
/a/new/path. Any future modifications to /some/path cause the modified data
to be copied before modification.
这将在 /a/new/path 创建 /some/path 上子卷的只读“克隆”。对 /some/path 的任何后续修改都会在修改前复制被修改的数据。
If the read-only (-r) option is left out, both subvolumes will be writable.
如果省略只读(-r)选项,两个子卷都将是可写的。
Enabling compression 启用压缩
By default, BTRFS does not compress data. To enable compression, the compress
mount option can be added. Note that data already written will not be compressed
after the fact.
默认情况下,BTRFS 不会压缩数据。要启用压缩,可以添加 compress 挂载选项。请注意,已经写入的数据不会事后被压缩。
By default, the rootfs will be listed in /etc/fstab as follows:
默认情况下,rootfs 会在 /etc/fstab 中列出如下:
UUID=<uuid of your root file system> / btrfs defaults 0 1
You can simply append compress=zstd, compress=lzo, or compress=zlib to the
defaults above like so:
你可以简单地在上述默认值后面添加 compress=zstd、compress=lzo 或 compress=zlib,如下所示:
UUID=<uuid of your root file system> / btrfs defaults,compress=zstd 0 1
This change will take effect after rebooting.
此更改将在重启后生效。
3.11. Proxmox Node Management
3.11. Proxmox 节点管理
The Proxmox VE node management tool (pvenode) allows you to control node specific
settings and resources.
Proxmox VE 节点管理工具(pvenode)允许您控制节点特定的设置和资源。
Currently pvenode allows you to set a node’s description, run various
bulk operations on the node’s guests, view the node’s task history, and
manage the node’s SSL certificates, which are used for the API and the web GUI
through pveproxy.
目前,pvenode 允许您设置节点的描述,对节点的虚拟机执行各种批量操作,查看节点的任务历史记录,以及管理节点的 SSL 证书,这些证书通过 pveproxy 用于 API 和网页图形界面。
3.11.1. Wake-on-LAN 3.11.1. 局域网唤醒(Wake-on-LAN)
Wake-on-LAN (WoL) allows you to switch on a sleeping computer in the network, by
sending a magic packet. At least one NIC must support this feature, and the
respective option needs to be enabled in the computer’s firmware (BIOS/UEFI)
configuration. The option name can vary from Enable Wake-on-Lan to
Power On By PCIE Device; check your motherboard’s vendor manual, if you’re
unsure. ethtool can be used to check the WoL configuration of <interface>
by running:
Wake-on-LAN(WoL)允许您通过发送魔术包来唤醒网络中处于睡眠状态的计算机。至少有一个网卡必须支持此功能,并且需要在计算机的固件(BIOS/UEFI)配置中启用相应选项。该选项名称可能从“启用 Wake-on-LAN”到“通过 PCIE 设备开机”不等;如果不确定,请查阅主板厂商手册。可以使用 ethtool 运行以下命令检查<interface>的 WoL 配置:
ethtool <interface> | grep Wake-on
pvenode allows you to wake sleeping members of a cluster via WoL, using the
command:
pvenode 允许您通过 WoL 唤醒集群中处于睡眠状态的成员,使用命令:
pvenode wakeonlan <node>
This broadcasts the WoL magic packet on UDP port 9, containing the MAC address
of <node> obtained from the wakeonlan property. The node-specific
wakeonlan property can be set using the following command:
该命令在 UDP 端口 9 上广播包含从 wakeonlan 属性获取的<node>的 MAC 地址的 WoL 魔术包。可以使用以下命令设置节点特定的 wakeonlan 属性:
pvenode config set -wakeonlan XX:XX:XX:XX:XX:XX
The interface via which to send the WoL packet is determined from the default
route. It can be overwritten by setting the bind-interface via the following
command:
发送 WoL 包的接口由默认路由确定。可以通过以下命令设置 bind-interface 来覆盖该接口:
pvenode config set -wakeonlan XX:XX:XX:XX:XX:XX,bind-interface=<iface-name>
The broadcast address (default 255.255.255.255) used when sending the WoL
packet can further be changed by setting the broadcast-address explicitly
using the following command:
发送 WoL 数据包时使用的广播地址(默认 255.255.255.255)可以通过以下命令显式设置 broadcast-address 来进一步更改:
pvenode config set -wakeonlan XX:XX:XX:XX:XX:XX,broadcast-address=<broadcast-address>
3.11.2. Task History 3.11.2. 任务历史
When troubleshooting server issues, for example, failed backup jobs, it can
often be helpful to have a log of the previously run tasks. With Proxmox VE, you can
access the nodes’s task history through the pvenode task command.
在排查服务器问题时,例如备份任务失败,查看之前运行任务的日志通常非常有帮助。使用 Proxmox VE,您可以通过 pvenode task 命令访问节点的任务历史。
You can get a filtered list of a node’s finished tasks with the list
subcommand. For example, to get a list of tasks related to VM 100
that ended with an error, the command would be:
您可以使用 list 子命令获取节点已完成任务的筛选列表。例如,要获取与虚拟机 100 相关且以错误结束的任务列表,命令如下:
pvenode task list --errors --vmid 100
The log of a task can then be printed using its UPID:
可以使用任务的 UPID 来打印该任务的日志:
pvenode task log UPID:pve1:00010D94:001CA6EA:6124E1B9:vzdump:100:root@pam:
3.11.3. Bulk Guest Power Management
3.11.3. 批量虚拟机电源管理
In case you have many VMs/containers, starting and stopping guests can be
carried out in bulk operations with the startall and stopall subcommands of
pvenode. By default, pvenode startall will only start VMs/containers which
have been set to automatically start on boot (see
Automatic Start and Shutdown of Virtual Machines),
however, you can override this behavior with the --force flag. Both commands
also have a --vms option, which limits the stopped/started guests to the
specified VMIDs.
如果您有许多虚拟机/容器,可以使用 pvenode 的 startall 和 stopall 子命令批量启动和停止虚拟机。默认情况下,pvenode startall 只会启动已设置为开机自动启动的虚拟机/容器(参见虚拟机的自动启动和关闭),但是,您可以使用--force 标志来覆盖此行为。两个命令都具有--vms 选项,该选项将停止/启动的虚拟机限制为指定的 VMID。
For example, to start VMs 100, 101, and 102, regardless of whether they
have onboot set, you can use:
例如,要启动虚拟机 100、101 和 102,无论它们是否设置了 onboot,都可以使用:
pvenode startall --vms 100,101,102 --force
To stop these guests (and any other guests that may be running), use the
command:
要停止这些来宾(以及可能正在运行的任何其他来宾),请使用以下命令:
pvenode stopall
|
|
The stopall command first attempts to perform a clean shutdown and then
waits until either all guests have successfully shut down or an overridable
timeout (3 minutes by default) has expired. Once that happens and the
force-stop parameter is not explicitly set to 0 (false), all virtual guests
that are still running are hard stopped. stopall 命令首先尝试执行干净的关机操作,然后等待直到所有来宾成功关闭或可覆盖的超时(默认 3 分钟)到期。一旦发生这种情况,且未显式将 force-stop 参数设置为 0(false),所有仍在运行的虚拟来宾将被强制停止。 |
3.11.4. First Guest Boot Delay
3.11.4. 第一个来宾启动延迟
In case your VMs/containers rely on slow-to-start external resources, for
example an NFS server, you can also set a per-node delay between the time Proxmox VE
boots and the time the first VM/container that is configured to autostart boots
(see Automatic Start and Shutdown of Virtual Machines).
如果您的虚拟机/容器依赖于启动较慢的外部资源,例如 NFS 服务器,您还可以设置每个节点的延迟时间,即 Proxmox VE 启动与配置为自动启动的第一个虚拟机/容器启动之间的时间间隔(参见虚拟机的自动启动和关闭)。
You can achieve this by setting the following (where 10 represents the delay
in seconds):
你可以通过设置以下内容来实现(其中 10 表示延迟秒数):
pvenode config set --startall-onboot-delay 10
3.11.5. Bulk Guest Migration
3.11.5. 批量虚拟机迁移
In case an upgrade situation requires you to migrate all of your guests from one
node to another, pvenode also offers the migrateall subcommand for bulk
migration. By default, this command will migrate every guest on the system to
the target node. It can however be set to only migrate a set of guests.
如果升级情况需要你将所有虚拟机从一个节点迁移到另一个节点,pvenode 还提供了 migrateall 子命令用于批量迁移。默认情况下,该命令会将系统上的每个虚拟机迁移到目标节点。不过,也可以设置只迁移一部分虚拟机。
For example, to migrate VMs 100, 101, and 102, to the node pve2, with
live-migration for local disks enabled, you can run:
例如,要将虚拟机 100、101 和 102 迁移到节点 pve2,并启用本地磁盘的实时迁移,可以运行:
pvenode migrateall pve2 --vms 100,101,102 --with-local-disks
3.11.6. RAM Usage Target for Ballooning
3.11.6. 气球内存使用目标
The target percentage for automatic memory allocation
defaults to 80%. You can customize this target per node by setting the
ballooning-target property. For example, to target 90% host memory usage
instead:
自动内存分配的目标百分比默认为 80%。您可以通过设置气球内存目标属性(ballooning-target)为每个节点自定义此目标。例如,要将目标设置为 90%的主机内存使用率:
pvenode config set --ballooning-target 90
3.12. Certificate Management
3.12. 证书管理
3.12.1. Certificates for Intra-Cluster Communication
3.12.1. 集群内部通信的证书
Each Proxmox VE cluster creates by default its own (self-signed) Certificate
Authority (CA) and generates a certificate for each node which gets signed by
the aforementioned CA. These certificates are used for encrypted communication
with the cluster’s pveproxy service and the Shell/Console feature if SPICE is
used.
每个 Proxmox VE 集群默认会创建自己的(自签名)证书颁发机构(CA),并为每个节点生成一个由上述 CA 签名的证书。这些证书用于与集群的 pveproxy 服务以及在使用 SPICE 时的 Shell/控制台功能进行加密通信。
The CA certificate and key are stored in the Proxmox Cluster File System (pmxcfs).
CA 证书和密钥存储在 Proxmox 集群文件系统(pmxcfs)中。
3.12.2. Certificates for API and Web GUI
3.12.2. API 和 Web GUI 的证书
The REST API and web GUI are provided by the pveproxy service, which runs on
each node.
REST API 和 Web GUI 由运行在每个节点上的 pveproxy 服务提供。
You have the following options for the certificate used by pveproxy:
您可以为 pveproxy 使用以下证书选项:
-
By default the node-specific certificate in /etc/pve/nodes/NODENAME/pve-ssl.pem is used. This certificate is signed by the cluster CA and therefore not automatically trusted by browsers and operating systems.
默认情况下,使用位于 /etc/pve/nodes/NODENAME/pve-ssl.pem 的节点特定证书。该证书由集群 CA 签发,因此浏览器和操作系统不会自动信任它。 -
use an externally provided certificate (e.g. signed by a commercial CA).
使用外部提供的证书(例如由商业 CA 签发的证书)。 -
use ACME (Let’s Encrypt) to get a trusted certificate with automatic renewal, this is also integrated in the Proxmox VE API and web interface.
使用 ACME(Let’s Encrypt)获取受信任的证书并实现自动续期,这也集成在 Proxmox VE API 和网页界面中。
For options 2 and 3 the file /etc/pve/local/pveproxy-ssl.pem (and
/etc/pve/local/pveproxy-ssl.key, which needs to be without password) is used.
对于选项 2 和 3,使用文件 /etc/pve/local/pveproxy-ssl.pem(以及需要无密码的 /etc/pve/local/pveproxy-ssl.key)。
|
|
Keep in mind that /etc/pve/local is a node specific symlink to
/etc/pve/nodes/NODENAME. 请记住,/etc/pve/local 是指向 /etc/pve/nodes/NODENAME 的节点特定符号链接。 |
Certificates are managed with the Proxmox VE Node management command
(see the pvenode(1) manpage).
证书通过 Proxmox VE 节点管理命令进行管理(参见 pvenode(1) 手册页)。
|
|
Do not replace or manually modify the automatically generated node
certificate files in /etc/pve/local/pve-ssl.pem and
/etc/pve/local/pve-ssl.key or the cluster CA files in
/etc/pve/pve-root-ca.pem and /etc/pve/priv/pve-root-ca.key. 请勿替换或手动修改 /etc/pve/local/pve-ssl.pem 和 /etc/pve/local/pve-ssl.key 中自动生成的节点证书文件,或 /etc/pve/pve-root-ca.pem 和 /etc/pve/priv/pve-root-ca.key 中的集群 CA 文件。 |
3.12.3. Upload Custom Certificate
3.12.3. 上传自定义证书
If you already have a certificate which you want to use for a Proxmox VE node you
can upload that certificate simply over the web interface.
如果您已经有一个想用于 Proxmox VE 节点的证书,可以通过网页界面简单地上传该证书。
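Alternatively, assuming the certificate chain and the unencrypted key file have already been copied to the node, a sketch of the equivalent CLI call would be:
# pvenode cert set certificate.crt certificate.key --force --restart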
3.12.4. Trusted certificates via Let’s Encrypt (ACME)
3.12.4. 通过 Let’s Encrypt(ACME)获取受信任的证书
Proxmox VE includes an implementation of the Automatic Certificate
Management Environment ACME protocol, allowing Proxmox VE admins to
use an ACME provider like Let’s Encrypt for easy setup of TLS certificates
which are accepted and trusted on modern operating systems and web browsers
out of the box.
Proxmox VE 包含了自动证书管理环境(ACME)协议的实现,允许 Proxmox VE 管理员使用像 Let’s Encrypt 这样的 ACME 提供商,轻松设置 TLS 证书,这些证书在现代操作系统和网页浏览器中开箱即用,受到接受和信任。
Currently, the two ACME endpoints implemented are the
Let’s Encrypt (LE) production and its staging
environment. Our ACME client supports validation of http-01 challenges using
a built-in web server and validation of dns-01 challenges using a DNS plugin
supporting all the DNS API endpoints acme.sh does.
目前,实现的两个 ACME 端点是 Let’s Encrypt(LE)生产环境及其测试环境。我们的 ACME 客户端支持使用内置的 Web 服务器验证 http-01 挑战,以及使用支持 acme.sh 所有 DNS API 端点的 DNS 插件验证 dns-01 挑战。
ACME Account ACME 账户
You need to register an ACME account per cluster with the endpoint you want to
use. The email address used for that account will serve as contact point for
renewal-due or similar notifications from the ACME endpoint.
您需要为每个集群使用想要使用的端点注册一个 ACME 账户。该账户使用的电子邮件地址将作为 ACME 端点发出续订到期或类似通知的联系点。
You can register and deactivate ACME accounts over the web interface
Datacenter -> ACME or using the pvenode command-line tool.
您可以通过网页界面 Datacenter -> ACME 注册和停用 ACME 账户,也可以使用 pvenode 命令行工具。
pvenode acme account register account-name mail@example.com
|
|
Because of rate-limits you
should use LE staging for experiments or if you use ACME for the first time. 由于速率限制,您应在实验或首次使用 ACME 时使用 LE 测试环境。 |
ACME Plugins ACME 插件
The ACME plugin's task is to provide automatic verification that you, and thus
the Proxmox VE cluster under your operation, are the real owner of a domain. This is
the basis building block for automatic certificate management.
ACME 插件的任务是自动验证您本人,以及您所管理的 Proxmox VE 集群,是真正的域名所有者。这是自动证书管理的基础构建模块。
The ACME protocol specifies different types of challenges, for example the
http-01 where a web server provides a file with a certain content to prove
that it controls a domain. Sometimes this isn’t possible, either because of
technical limitations or if the address of a record is not reachable from
the public internet. The dns-01 challenge can be used in these cases. This
challenge is fulfilled by creating a certain DNS record in the domain’s zone.
ACME 协议指定了不同类型的挑战,例如 http-01 挑战,其中 Web 服务器提供一个包含特定内容的文件,以证明其控制某个域名。有时这不可行,可能是由于技术限制,或者记录的地址无法从公共互联网访问。在这些情况下,可以使用 dns-01 挑战。该挑战通过在域的区域中创建特定的 DNS 记录来完成。
Proxmox VE supports both of those challenge types out of the box, you can configure
plugins either over the web interface under Datacenter -> ACME, or using the
pvenode acme plugin add command.
Proxmox VE 开箱即支持这两种挑战类型,您可以通过 Web 界面在“数据中心 -> ACME”下配置插件,或者使用 pvenode acme plugin add 命令进行配置。
ACME Plugin configurations are stored in /etc/pve/priv/acme/plugins.cfg.
A plugin is available for all nodes in the cluster.
ACME 插件配置存储在 /etc/pve/priv/acme/plugins.cfg 文件中。插件对集群中的所有节点均可用。
Node Domains 节点域
Each domain is node specific. You can add new or manage existing domain entries
under Node -> Certificates, or using the pvenode config command.
每个域都是节点特定的。您可以在节点 -> 证书下添加新的或管理现有的域条目,或者使用 pvenode config 命令。
After configuring the desired domain(s) for a node and ensuring that the
desired ACME account is selected, you can order your new certificate over the
web interface. On success the interface will reload after 10 seconds.
在为节点配置好所需的域并确保选择了所需的 ACME 账户后,您可以通过网页界面订购新的证书。成功后,界面将在 10 秒后重新加载。
Renewal will happen automatically.
续订将自动进行。
3.12.5. ACME HTTP Challenge Plugin
3.12.5. ACME HTTP 挑战插件
There is always an implicitly configured standalone plugin for validating
http-01 challenges via the built-in webserver spawned on port 80.
总是有一个隐式配置的独立插件,用于通过内置的 Web 服务器(监听端口 80)验证 http-01 挑战。
|
|
The name standalone means that it can provide the validation on its
own, without any third party service. So, this plugin works also for cluster
nodes. “独立”一词意味着它可以自行提供验证服务,无需任何第三方服务。因此,该插件也适用于集群节点。 |
There are a few prerequisites to use it for certificate management with Let’s
Encrypt's ACME.
使用它进行 Let’s Encrypt 的 ACME 证书管理有一些前提条件。
-
You have to accept the ToS of Let’s Encrypt to register an account.
您必须接受 Let’s Encrypt 的服务条款才能注册账户。 -
Port 80 of the node needs to be reachable from the internet.
节点的 80 端口需要能够从互联网访问。 -
There must be no other listener on port 80.
80 端口上不能有其他监听程序。 -
The requested (sub)domain needs to resolve to a public IP of the Node.
请求的(子)域名需要解析到节点的公网 IP。
3.12.6. ACME DNS API Challenge Plugin
3.12.6. ACME DNS API 挑战插件
On systems where external access for validation via the http-01 method is
not possible or desired, it is possible to use the dns-01 validation method.
This validation method requires a DNS server that allows provisioning of TXT
records via an API.
在无法或不希望通过 http-01 方法进行外部访问验证的系统上,可以使用 dns-01 验证方法。此验证方法需要一个允许通过 API 配置 TXT 记录的 DNS 服务器。
Configuring ACME DNS APIs for validation
配置 ACME DNS API 进行验证
Proxmox VE re-uses the DNS plugins developed for the acme.sh
[7] project, please
refer to its documentation for details on configuration of specific APIs.
Proxmox VE 复用了 acme.sh [7] 项目开发的 DNS 插件,具体 API 配置请参阅其文档。
The easiest way to configure a new plugin with the DNS API is using the web
interface (Datacenter -> ACME).
配置带有 DNS API 的新插件最简单的方法是使用网页界面(数据中心 -> ACME)。
Choose DNS as challenge type. Then you can select your API provider and enter
the credential data to access your account over their API.
The validation delay determines the time in seconds between setting the DNS
record and prompting the ACME provider to validate it, as providers often need
some time to propagate the record in their infrastructure.
选择 DNS 作为挑战类型。然后您可以选择您的 API 提供商,输入凭证数据以通过其 API 访问您的账户。验证延迟决定了设置 DNS 记录与提示 ACME 提供商验证之间的秒数,因为提供商通常需要一些时间在其基础设施中传播该记录。
|
|
See the acme.sh
How to use DNS API
wiki for more detailed information about getting API credentials for your
provider. 有关获取您提供商的 API 凭证的更详细信息,请参阅 acme.sh 如何使用 DNS API 维基。 |
As there are many DNS providers and API endpoints, Proxmox VE automatically generates
the credential form for some providers. For the others, you will see a
larger text area; simply copy all of the credential KEY=VALUE pairs into it.
由于存在许多 DNS 提供商和 API 端点,Proxmox VE 会自动为某些提供商生成凭证表单。对于其他提供商,您将看到一个较大的文本区域,只需将所有凭证的 KEY=VALUE 键值对复制到其中即可。
DNS Validation through CNAME Alias
通过 CNAME 别名进行 DNS 验证
A special alias mode can be used to handle the validation on a different
domain/DNS server, in case your primary/real DNS does not support provisioning
via an API. Manually set up a permanent CNAME record for
_acme-challenge.domain1.example pointing to _acme-challenge.domain2.example
and set the alias property on the corresponding acmedomainX key in the
Proxmox VE node configuration file to domain2.example to allow the DNS server of
domain2.example to validate all challenges for domain1.example.
可以使用一种特殊的别名模式来处理不同域名/DNS 服务器上的验证,以防您的主/真实 DNS 不支持通过 API 进行配置。手动为_acme-challenge.domain1.example 设置一个永久的 CNAME 记录,指向_acme-challenge.domain2.example,并在 Proxmox VE 节点配置文件中对应的 acmedomainX 键上设置 alias 属性为 domain2.example,以允许 domain2.example 的 DNS 服务器验证 domain1.example 的所有挑战。
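A minimal sketch of this setup, assuming domain1.example and domain2.example as in
the text and a DNS plugin instance named example_plugin for domain2.example's
provider. The first line is the record to create in domain1.example's zone, the
second the matching node configuration:
_acme-challenge.domain1.example. IN CNAME _acme-challenge.domain2.example.
root@proxmox:~# pvenode config set --acmedomain0 domain1.example,plugin=example_plugin,alias=domain2.example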
Combination of Plugins 插件组合
Combining http-01 and dns-01 validation is possible in case your node is
reachable via multiple domains with different requirements / DNS provisioning
capabilities. Mixing DNS APIs from multiple providers or instances is also
possible by specifying different plugin instances per domain.
如果您的节点可以通过多个具有不同要求/DNS 配置能力的域名访问,则可以结合使用 http-01 和 dns-01 验证。通过为每个域名指定不同的插件实例,也可以混合使用来自多个提供商或实例的 DNS API。
|
|
Accessing the same service over multiple domains increases complexity and
should be avoided if possible. 通过多个域名访问同一服务会增加复杂性,应尽可能避免。 |
3.12.7. Automatic renewal of ACME certificates
3.12.7. ACME 证书的自动续期
If a node has been successfully configured with an ACME-provided certificate
(either via pvenode or via the GUI), the certificate will be automatically
renewed by the pve-daily-update.service. Currently, renewal will be attempted
if the certificate has expired already, or will expire in the next 30 days.
如果节点已成功配置了由 ACME 提供的证书(无论是通过 pvenode 还是通过 GUI),该证书将由 pve-daily-update.service 自动续期。目前,如果证书已过期或将在接下来的 30 天内过期,将尝试续期。
|
|
If you are using a custom directory that issues short-lived certificates,
disabling the random delay for the pve-daily-update.timer unit might be
advisable to avoid missing a certificate renewal after a reboot. 如果您使用的是颁发短期证书的自定义目录,建议禁用 pve-daily-update.timer 单元的随机延迟,以避免重启后错过证书续期。 |
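Disabling that random delay is plain systemd configuration rather than a
Proxmox VE specific command; a sketch using a drop-in override for the timer unit
could look like this (verify the unit name and setting on your system before
relying on it):
root@proxmox:~# mkdir -p /etc/systemd/system/pve-daily-update.timer.d
root@proxmox:~# printf '[Timer]\nRandomizedDelaySec=0\n' > /etc/systemd/system/pve-daily-update.timer.d/override.conf
root@proxmox:~# systemctl daemon-reload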
3.12.8. ACME Examples with pvenode
3.12.8. 使用 pvenode 的 ACME 示例
Example: Sample pvenode invocation for using Let’s Encrypt certificates
示例:使用 Let’s Encrypt 证书的 pvenode 调用示例
root@proxmox:~# pvenode acme account register default mail@example.invalid
Directory endpoints:
0) Let's Encrypt V2 (https://acme-v02.api.letsencrypt.org/directory)
1) Let's Encrypt V2 Staging (https://acme-staging-v02.api.letsencrypt.org/directory)
2) Custom
Enter selection: 1
Terms of Service: https://letsencrypt.org/documents/LE-SA-v1.2-November-15-2017.pdf
Do you agree to the above terms? [y|N]y
...
Task OK

root@proxmox:~# pvenode config set --acme domains=example.invalid

root@proxmox:~# pvenode acme cert order
Loading ACME account details
Placing ACME order
...
Status is 'valid'!
All domains validated!
...
Downloading certificate
Setting pveproxy certificate and key
Restarting pveproxy
Task OK
Example: Setting up the OVH API for validating a domain
示例:设置 OVH API 以验证域名
|
|
the account registration steps are the same no matter which plugins are
used, and are not repeated here. 无论使用哪种插件,账户注册步骤都是相同的,此处不再重复。 |
|
|
OVH_AK and OVH_AS need to be obtained from OVH according to the OVH
API documentation OVH_AK 和 OVH_AS 需要根据 OVH API 文档从 OVH 获取。 |
First you need to get all information so you and Proxmox VE can access the API.
首先,您需要获取所有信息,以便您和 Proxmox VE 可以访问 API。
root@proxmox:~# cat /path/to/api-token
OVH_AK=XXXXXXXXXXXXXXXX
OVH_AS=YYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYY
root@proxmox:~# source /path/to/api-token
root@proxmox:~# curl -XPOST -H"X-Ovh-Application: $OVH_AK" -H "Content-type: application/json" \
https://eu.api.ovh.com/1.0/auth/credential -d '{
"accessRules": [
{"method": "GET","path": "/auth/time"},
{"method": "GET","path": "/domain"},
{"method": "GET","path": "/domain/zone/*"},
{"method": "GET","path": "/domain/zone/*/record"},
{"method": "POST","path": "/domain/zone/*/record"},
{"method": "POST","path": "/domain/zone/*/refresh"},
{"method": "PUT","path": "/domain/zone/*/record/"},
{"method": "DELETE","path": "/domain/zone/*/record/*"}
]
}'
{"consumerKey":"ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ","state":"pendingValidation","validationUrl":"https://eu.api.ovh.com/auth/?credentialToken=AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA"}
(open validation URL and follow instructions to link Application Key with account/Consumer Key)
root@proxmox:~# echo "OVH_CK=ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ" >> /path/to/api-token
Now you can set up the ACME plugin:
现在您可以设置 ACME 插件:
root@proxmox:~# pvenode acme plugin add dns example_plugin --api ovh --data /path/to/api_token
root@proxmox:~# pvenode acme plugin config example_plugin
┌────────┬──────────────────────────────────────────┐
│ key    │ value                                    │
╞════════╪══════════════════════════════════════════╡
│ api    │ ovh                                      │
├────────┼──────────────────────────────────────────┤
│ data   │ OVH_AK=XXXXXXXXXXXXXXXX                  │
│        │ OVH_AS=YYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYY  │
│        │ OVH_CK=ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ  │
├────────┼──────────────────────────────────────────┤
│ digest │ 867fcf556363ca1bea866863093fcab83edf47a1 │
├────────┼──────────────────────────────────────────┤
│ plugin │ example_plugin                           │
├────────┼──────────────────────────────────────────┤
│ type   │ dns                                      │
└────────┴──────────────────────────────────────────┘
Finally, you can configure the domain you want to get certificates for and
place the certificate order for it:
最后,您可以配置您想要获取证书的域名,并为其下达证书订单:
root@proxmox:~# pvenode config set -acmedomain0 example.proxmox.com,plugin=example_plugin

root@proxmox:~# pvenode acme cert order
Loading ACME account details
Placing ACME order
Order URL: https://acme-staging-v02.api.letsencrypt.org/acme/order/11111111/22222222
Getting authorization details from 'https://acme-staging-v02.api.letsencrypt.org/acme/authz-v3/33333333'
The validation for example.proxmox.com is pending!
[Wed Apr 22 09:25:30 CEST 2020] Using OVH endpoint: ovh-eu
[Wed Apr 22 09:25:30 CEST 2020] Checking authentication
[Wed Apr 22 09:25:30 CEST 2020] Consumer key is ok.
[Wed Apr 22 09:25:31 CEST 2020] Adding record
[Wed Apr 22 09:25:32 CEST 2020] Added, sleep 10 seconds.
Add TXT record: _acme-challenge.example.proxmox.com
Triggering validation
Sleeping for 5 seconds
Status is 'valid'!
[Wed Apr 22 09:25:48 CEST 2020] Using OVH endpoint: ovh-eu
[Wed Apr 22 09:25:48 CEST 2020] Checking authentication
[Wed Apr 22 09:25:48 CEST 2020] Consumer key is ok.
Remove TXT record: _acme-challenge.example.proxmox.com
All domains validated!
Creating CSR
Checking order status
Order is ready, finalizing order
valid!
Downloading certificate
Setting pveproxy certificate and key
Restarting pveproxy
Task OK
Example: Switching from the staging to the regular ACME directory
示例:从测试环境切换到常规 ACME 目录
Changing the ACME directory for an account is unsupported, but as Proxmox VE
supports more than one account you can just create a new one with the
production (trusted) ACME directory as endpoint. You can also deactivate the
staging account and recreate it.
更改账户的 ACME 目录是不支持的,但由于 Proxmox VE 支持多个账户,您可以直接创建一个以生产(受信任)ACME 目录为端点的新账户。您也可以停用暂存账户并重新创建它。
Example: Changing the default ACME account from staging to the regular directory using pvenode
示例:使用 pvenode 将默认 ACME 账户从暂存环境更改为常规目录
root@proxmox:~# pvenode acme account deactivate default
Renaming account file from '/etc/pve/priv/acme/default' to '/etc/pve/priv/acme/_deactivated_default_4'
Task OK

root@proxmox:~# pvenode acme account register default example@proxmox.com
Directory endpoints:
0) Let's Encrypt V2 (https://acme-v02.api.letsencrypt.org/directory)
1) Let's Encrypt V2 Staging (https://acme-staging-v02.api.letsencrypt.org/directory)
2) Custom
Enter selection: 0
Terms of Service: https://letsencrypt.org/documents/LE-SA-v1.2-November-15-2017.pdf
Do you agree to the above terms? [y|N]y
...
Task OK
3.13. Host Bootloader 3.13. 主机引导加载程序
Proxmox VE currently uses one of two bootloaders depending on the disk setup
selected in the installer.
Proxmox VE 目前根据安装程序中选择的磁盘设置使用两种引导加载程序之一。
For EFI Systems installed with ZFS as the root filesystem systemd-boot is
used, unless Secure Boot is enabled. All other deployments use the standard
GRUB bootloader (this usually also applies to systems which are installed on
top of Debian).
对于使用 ZFS 作为根文件系统的 EFI 系统,除非启用了安全启动,否则使用 systemd-boot。所有其他部署使用标准的 GRUB 引导加载程序(这通常也适用于安装在 Debian 之上的系统)。
3.13.1. Partitioning Scheme Used by the Installer
3.13.1. 安装程序使用的分区方案
The Proxmox VE installer creates 3 partitions on all disks selected for
installation.
Proxmox VE 安装程序会在所有选定用于安装的磁盘上创建 3 个分区。
The created partitions are:
创建的分区包括:
-
a 1 MB BIOS Boot Partition (gdisk type EF02)
一个 1 MB 的 BIOS 启动分区(gdisk 类型 EF02) -
a 512 MB EFI System Partition (ESP, gdisk type EF00)
一个 512 MB 的 EFI 系统分区(ESP,gdisk 类型 EF00) -
a third partition spanning the set hdsize parameter or the remaining space used for the chosen storage type
第三个分区跨越设置的 hdsize 参数或所选存储类型的剩余空间
Systems using ZFS as root filesystem are booted with a kernel and initrd image
stored on the 512 MB EFI System Partition. For legacy BIOS systems, and EFI
systems with Secure Boot enabled, GRUB is used; for EFI systems without
Secure Boot, systemd-boot is used. Both are installed and configured to point
to the ESPs.
使用 ZFS 作为根文件系统的系统通过存储在 512 MB EFI 系统分区上的内核和 initrd 镜像启动。对于传统 BIOS 系统以及启用安全启动的 EFI 系统,使用 GRUB;对于未启用安全启动的 EFI 系统,使用 systemd-boot。两者都已安装并配置为指向 ESP。
GRUB in BIOS mode (--target i386-pc) is installed onto the BIOS Boot
Partition of all selected disks on all systems booted with GRUB
[8].
BIOS 模式下的 GRUB(--target i386-pc)安装在所有使用 GRUB 启动的系统所选磁盘的 BIOS 启动分区上[8]。
3.13.2. Synchronizing the content of the ESP with proxmox-boot-tool
3.13.2. 使用 proxmox-boot-tool 同步 ESP 内容
proxmox-boot-tool is a utility used to keep the contents of the EFI System
Partitions properly configured and synchronized. It copies certain kernel
versions to all ESPs and configures the respective bootloader to boot from
the vfat formatted ESPs. In the context of ZFS as root filesystem this means
that you can use all optional features on your root pool instead of the subset
which is also present in the ZFS implementation in GRUB or having to create a
separate small boot-pool [9].
proxmox-boot-tool 是一个用于保持 EFI 系统分区内容正确配置和同步的工具。它将某些内核版本复制到所有 ESP,并配置相应的引导加载程序从 vfat 格式的 ESP 启动。在以 ZFS 作为根文件系统的环境中,这意味着您可以在根池上使用所有可选功能,而不是仅使用 GRUB 中 ZFS 实现的子集,或者必须创建一个单独的小型启动池[9]。
In setups with redundancy, all disks are partitioned with an ESP by the
installer. This ensures the system boots even if the first boot device fails
or if the BIOS can only boot from a particular disk.
在具有冗余的设置中,安装程序会为所有磁盘分区创建 ESP。这确保即使第一个启动设备失败或 BIOS 只能从特定磁盘启动,系统仍能启动。
The ESPs are not kept mounted during regular operation. This helps to prevent
filesystem corruption to the vfat formatted ESPs in case of a system crash,
and removes the need to manually adapt /etc/fstab in case the primary boot
device fails.
ESP 在正常运行期间不会保持挂载状态。这有助于防止系统崩溃时对 vfat 格式的 ESP 文件系统造成损坏,并且在主启动设备故障时无需手动修改 /etc/fstab。
proxmox-boot-tool handles the following tasks:
proxmox-boot-tool 负责以下任务:
-
formatting and setting up a new partition
格式化并设置新的分区 -
copying and configuring new kernel images and initrd images to all listed ESPs
将新的内核镜像和 initrd 镜像复制并配置到所有列出的 ESP 中 -
synchronizing the configuration on kernel upgrades and other maintenance tasks
在内核升级和其他维护任务时同步配置 -
managing the list of kernel versions which are synchronized
管理同步的内核版本列表 -
configuring the boot-loader to boot a particular kernel version (pinning)
配置引导加载程序以启动特定的内核版本(固定)
You can view the currently configured ESPs and their state by running:
您可以通过运行以下命令查看当前配置的 ESP 及其状态:
# proxmox-boot-tool status
Setting up a New Partition for use as Synced ESP
设置一个新的分区以用作同步 ESP
To format and initialize a partition as synced ESP, e.g., after replacing a
failed vdev in an rpool, or when converting an existing system that pre-dates
the sync mechanism, proxmox-boot-tool from proxmox-kernel-helper can be used.
要将分区格式化并初始化为同步 ESP,例如,在替换 rpool 中失败的 vdev 后,或在转换早于同步机制的现有系统时,可以使用来自 proxmox-kernel-helper 的 proxmox-boot-tool。
|
|
the format command will format the <partition>, make sure to pass
in the right device/partition! format 命令将格式化<partition>,请确保传入正确的设备/分区! |
For example, to format an empty partition /dev/sda2 as ESP, run the following:
例如,要将空分区/dev/sda2 格式化为 ESP,请运行以下命令:
# proxmox-boot-tool format /dev/sda2
To setup an existing, unmounted ESP located on /dev/sda2 for inclusion in
Proxmox VE’s kernel update synchronization mechanism, use the following:
要设置位于 /dev/sda2 上的现有未挂载 ESP,以便将其包含在 Proxmox VE 的内核更新同步机制中,请使用以下命令:
# proxmox-boot-tool init /dev/sda2
or 或者
# proxmox-boot-tool init /dev/sda2 grub
to force initialization with GRUB instead of systemd-boot, for example for
Secure Boot support.
以强制使用 GRUB 初始化而非 systemd-boot,例如用于支持安全启动。
Afterwards /etc/kernel/proxmox-boot-uuids should contain a new line with the
UUID of the newly added partition. The init command will also automatically
trigger a refresh of all configured ESPs.
之后,/etc/kernel/proxmox-boot-uuids 文件中应包含一行新添加分区的 UUID。init 命令还会自动触发对所有已配置 ESP 的刷新。
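To verify, you can list the file and check the tool's status; the UUID values
shown here are only placeholders:
root@proxmox:~# cat /etc/kernel/proxmox-boot-uuids
A1B2-C3D4
E5F6-A7B8
root@proxmox:~# proxmox-boot-tool status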
Updating the Configuration on all ESPs
更新所有 ESP 上的配置
To copy and configure all bootable kernels and keep all ESPs listed in
/etc/kernel/proxmox-boot-uuids in sync you just need to run:
要复制和配置所有可启动的内核,并保持/etc/kernel/proxmox-boot-uuids 中列出的所有 ESP 同步,只需运行:
# proxmox-boot-tool refresh
(The equivalent to running update-grub on systems with ext4 or xfs on root).
(相当于在根目录使用 ext4 或 xfs 的系统上运行 update-grub)。
This is necessary should you make changes to the kernel commandline, or want to
sync all kernels and initrds.
如果您对内核命令行进行了更改,或想同步所有内核和 initrd,这是必要的操作。
|
|
Both update-initramfs and apt (when necessary) will automatically
trigger a refresh. update-initramfs 和 apt(在必要时)都会自动触发刷新。 |
Kernel Versions Considered by proxmox-boot-tool
proxmox-boot-tool 考虑的内核版本
The following kernel versions are configured by default:
默认配置的内核版本如下:
-
the currently running kernel
当前正在运行的内核 -
the version being newly installed on package updates
软件包更新时新安装的版本 -
the two latest already installed kernels
已经安装的两个最新内核 -
the latest version of the second-to-last kernel series (e.g. 5.0, 5.3), if applicable
倒数第二个内核系列的最新版本(例如 5.0、5.3),如果适用 -
any manually selected kernels
任何手动选择的内核
Manually Keeping a Kernel Bootable
手动保持内核可启动
Should you wish to add a certain kernel and initrd image to the list of
bootable kernels use proxmox-boot-tool kernel add.
如果您希望将某个内核和 initrd 镜像添加到可启动内核列表中,请使用 proxmox-boot-tool kernel add。
For example run the following to add the kernel with ABI version 5.0.15-1-pve
to the list of kernels to keep installed and synced to all ESPs:
例如,运行以下命令将 ABI 版本为 5.0.15-1-pve 的内核添加到保持安装并同步到所有 ESP 的内核列表中:
# proxmox-boot-tool kernel add 5.0.15-1-pve
proxmox-boot-tool kernel list will list all kernel versions currently selected
for booting:
proxmox-boot-tool kernel list 会列出当前选定用于启动的所有内核版本:
# proxmox-boot-tool kernel list
Manually selected kernels:
5.0.15-1-pve

Automatically selected kernels:
5.0.12-1-pve
4.15.18-18-pve
Run proxmox-boot-tool kernel remove to remove a kernel from the list of
manually selected kernels, for example:
运行 proxmox-boot-tool kernel remove 从手动选择的内核列表中移除内核,例如:
# proxmox-boot-tool kernel remove 5.0.15-1-pve
|
|
It’s required to run proxmox-boot-tool refresh to update all EFI System
Partitions (ESPs) after a manual kernel addition or removal as described above. 在手动添加或移除内核后,必须运行 proxmox-boot-tool refresh 来更新所有 EFI 系统分区(ESP)。 |
3.13.3. Determine which Bootloader is Used
3.13.3. 确定使用的引导加载程序
The simplest and most reliable way to determine which bootloader is used, is to
watch the boot process of the Proxmox VE node.
确定使用哪种引导加载程序最简单且最可靠的方法是观察 Proxmox VE 节点的启动过程。
You will either see the blue box of GRUB or the simple black on white
systemd-boot.
你将看到蓝色的 GRUB 框,或者简单的黑底白字的 systemd-boot。
Determining the bootloader from a running system might not be 100% accurate. The
safest way is to run the following command:
从正在运行的系统中确定引导加载程序可能不是 100%准确。最安全的方法是运行以下命令:
# efibootmgr -v
If it returns a message that EFI variables are not supported, GRUB is used in
BIOS/Legacy mode.
如果返回消息显示不支持 EFI 变量,则表示 GRUB 在 BIOS/传统模式下使用。
If the output contains a line that looks similar to the following, GRUB is
used in UEFI mode.
如果输出中包含类似以下的行,则表示 GRUB 在 UEFI 模式下使用。
Boot0005* proxmox [...] File(\EFI\proxmox\grubx64.efi)
If the output contains a line similar to the following, systemd-boot is used.
如果输出包含类似以下的行,则表示使用了 systemd-boot。
Boot0006* Linux Boot Manager [...] File(\EFI\systemd\systemd-bootx64.efi)
By running: 通过运行:
# proxmox-boot-tool status
you can find out if proxmox-boot-tool is configured, which is a good
indication of how the system is booted.
你可以查明 proxmox-boot-tool 是否已配置,这可以很好地指示系统的启动方式。
3.13.4. GRUB
GRUB has been the de-facto standard for booting Linux systems for many years
and is quite well documented
[10].
多年来,GRUB 一直是启动 Linux 系统的事实标准,并且有相当完善的文档[10]。
Configuration 配置
Changes to the GRUB configuration are done via the defaults file
/etc/default/grub or config snippets in /etc/default/grub.d. To regenerate
the configuration file after a change to the configuration run:
[11]
对 GRUB 配置的更改通过默认文件 /etc/default/grub 或 /etc/default/grub.d 中的配置片段完成。更改配置后,运行以下命令以重新生成配置文件:[11]
# update-grub
3.13.5. Systemd-boot
systemd-boot is a lightweight EFI bootloader. It reads the kernel and initrd
images directly from the EFI System Partition (ESP) where it is installed.
The main advantage of directly loading the kernel from the ESP is that it does
not need to reimplement the drivers for accessing the storage. In Proxmox VE
proxmox-boot-tool is used to keep the
configuration on the ESPs synchronized.
systemd-boot 是一个轻量级的 EFI 引导加载程序。它直接从安装所在的 EFI 系统分区(ESP)读取内核和 initrd 镜像。直接从 ESP 加载内核的主要优点是无需重新实现访问存储设备的驱动程序。在 Proxmox VE 中,使用 proxmox-boot-tool 来保持 ESP 上的配置同步。
Configuration 配置
systemd-boot is configured via the file loader/loader.conf in the root
directory of an EFI System Partition (ESP). See the loader.conf(5) manpage
for details.
systemd-boot 通过 EFI 系统分区(ESP)根目录下的文件 loader/loader.conf 进行配置。详情请参见 loader.conf(5) 手册页。
Each bootloader entry is placed in a file of its own in the directory
loader/entries/
每个引导加载项都放置在 loader/entries/ 目录下的单独文件中。
An example entry.conf looks like this (/ refers to the root of the ESP):
一个示例 entry.conf 如下所示(/ 指的是 ESP 的根目录):
title    Proxmox
version  5.0.15-1-pve
options  root=ZFS=rpool/ROOT/pve-1 boot=zfs
linux    /EFI/proxmox/5.0.15-1-pve/vmlinuz-5.0.15-1-pve
initrd   /EFI/proxmox/5.0.15-1-pve/initrd.img-5.0.15-1-pve
3.13.6. Editing the Kernel Commandline
3.13.6. 编辑内核命令行
You can modify the kernel commandline in the following places, depending on the
bootloader used:
您可以根据所使用的引导加载程序,在以下位置修改内核命令行:
GRUB
The kernel commandline needs to be placed in the variable
GRUB_CMDLINE_LINUX_DEFAULT in the file /etc/default/grub. Running
update-grub appends its content to all linux entries in
/boot/grub/grub.cfg.
内核命令行需要放置在文件 /etc/default/grub 中的变量 GRUB_CMDLINE_LINUX_DEFAULT 中。运行 update-grub 会将其内容追加到 /boot/grub/grub.cfg 中的所有 linux 条目。
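As a sketch, the relevant line in /etc/default/grub could look like the
following; the added intel_iommu=on parameter is only an example of an extra
option, and update-grub applies the change:
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"
# update-grub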
systemd-boot
The kernel commandline needs to be placed as one line in /etc/kernel/cmdline.
To apply your changes, run proxmox-boot-tool refresh, which sets it as the
option line for all config files in loader/entries/proxmox-*.conf.
内核命令行需要作为一行放置在 /etc/kernel/cmdline 中。要应用更改,运行 proxmox-boot-tool refresh,该命令会将其设置为 loader/entries/proxmox-*.conf 中所有配置文件的选项行。
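A corresponding sketch for /etc/kernel/cmdline on a ZFS root; the root dataset
and the extra intel_iommu=on parameter are examples, and everything must stay on
a single line:
root=ZFS=rpool/ROOT/pve-1 boot=zfs quiet intel_iommu=on
# proxmox-boot-tool refresh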
A complete list of kernel parameters can be found at
https://www.kernel.org/doc/html/v<YOUR-KERNEL-VERSION>/admin-guide/kernel-parameters.html.
replace <YOUR-KERNEL-VERSION> with the major.minor version, for example, for
kernels based on version 6.5 the URL would be:
https://www.kernel.org/doc/html/v6.5/admin-guide/kernel-parameters.html
完整的内核参数列表可以在 https://www.kernel.org/doc/html/v<YOUR-KERNEL-VERSION>/admin-guide/kernel-parameters.html 找到。将 <YOUR-KERNEL-VERSION> 替换为主版本号.次版本号,例如,对于基于版本 6.5 的内核,URL 为:https://www.kernel.org/doc/html/v6.5/admin-guide/kernel-parameters.html
You can find your kernel version by checking the web interface (Node →
Summary), or by running
您可以通过检查网页界面(节点 → 概览)来查看您的内核版本,或者运行
# uname -r
Use the first two numbers at the front of the output.
使用输出结果开头的前两个数字。
3.13.7. Override the Kernel-Version for next Boot
3.13.7. 覆盖下一次启动的内核版本
To select a kernel that is not currently the default kernel, you can either:
要选择当前非默认的内核,您可以:
-
use the boot loader menu that is displayed at the beginning of the boot process
使用启动过程中显示的引导加载程序菜单 -
use the proxmox-boot-tool to pin the system to a kernel version either once or permanently (until pin is reset).
使用 proxmox-boot-tool 将系统固定到某个内核版本,可以选择一次性固定或永久固定(直到重置固定)。
This should help you work around incompatibilities between a newer kernel
version and the hardware.
这应能帮助你解决较新内核版本与硬件之间的不兼容问题。
|
|
Such a pin should be removed as soon as possible so that all current
security patches of the latest kernel are also applied to the system. 应尽快移除此类固定,以便系统能够应用最新内核的所有当前安全补丁。 |
For example: To permanently select the version 5.15.30-1-pve for booting you
would run:
例如:要永久选择版本 5.15.30-1-pve 进行启动,您可以运行:
# proxmox-boot-tool kernel pin 5.15.30-1-pve
|
|
The pinning functionality works for all Proxmox VE systems, not only those using
proxmox-boot-tool to synchronize the contents of the ESPs. If your system
does not use proxmox-boot-tool for synchronizing, you can also skip the
final proxmox-boot-tool refresh call. 该固定功能适用于所有 Proxmox VE 系统,不仅限于使用 proxmox-boot-tool 同步 ESP 内容的系统,如果您的系统不使用 proxmox-boot-tool 进行同步,也可以跳过最后的 proxmox-boot-tool 刷新调用。 |
You can also set a kernel version to be booted on the next system boot only.
This is for example useful to test if an updated kernel has resolved an issue,
which caused you to pin a version in the first place:
您还可以设置仅在下一次系统启动时启动某个内核版本。例如,这对于测试更新的内核是否解决了导致您最初固定版本的问题非常有用:
# proxmox-boot-tool kernel pin 5.15.30-1-pve --next-boot
To remove any pinned version configuration use the unpin subcommand:
要移除任何固定的版本配置,请使用 unpin 子命令:
# proxmox-boot-tool kernel unpin
While unpin has a --next-boot option as well, it is used to clear a pinned
version set with --next-boot. As that happens already automatically on boot,
invoking it manually is of little use.
虽然 unpin 也有一个 --next-boot 选项,但它用于清除使用 --next-boot 设置的固定版本。由于这在启动时已经自动发生,手动调用它几乎没有用处。
After setting, or clearing pinned versions you also need to synchronize the
content and configuration on the ESPs by running the refresh subcommand.
在设置或清除固定版本后,还需要通过运行 refresh 子命令来同步 ESP 上的内容和配置。
|
|
You will be prompted to do so automatically for proxmox-boot-tool managed
systems if you call the tool interactively. 如果你以交互方式调用该工具,系统会提示你自动为 proxmox-boot-tool 管理的系统执行此操作。 |
# proxmox-boot-tool refresh
3.13.8. Secure Boot 3.13.8. 安全启动
Since Proxmox VE 8.1, Secure Boot is supported out of the box via signed packages
and integration in proxmox-boot-tool.
自 Proxmox VE 8.1 起,通过签名包和 proxmox-boot-tool 的集成,开箱即支持安全启动。
The following packages are required for secure boot to work. You can
install them all at once by using the ‘proxmox-secure-boot-support’
meta-package.
以下软件包是安全启动正常工作的必需品。您可以使用“proxmox-secure-boot-support”元包一次性安装它们。
-
shim-signed (shim bootloader signed by Microsoft)
shim-signed(由 Microsoft 签名的 shim 引导程序) -
shim-helpers-amd64-signed (fallback bootloader and MOKManager, signed by Proxmox)
shim-helpers-amd64-signed(备用引导程序和 MOKManager,由 Proxmox 签名) -
grub-efi-amd64-signed (GRUB EFI bootloader, signed by Proxmox)
grub-efi-amd64-signed(GRUB EFI 引导加载程序,由 Proxmox 签名) -
proxmox-kernel-6.X.Y-Z-pve-signed (Kernel image, signed by Proxmox)
proxmox-kernel-6.X.Y-Z-pve-signed(内核镜像,由 Proxmox 签名)
Only GRUB is supported as bootloader out of the box, since other bootloaders are
currently not eligible for secure boot code-signing.
开箱即用仅支持 GRUB 作为引导加载程序,因为其他引导加载程序目前不符合安全启动代码签名的要求。
Any new installation of Proxmox VE will automatically have all of the above packages
included.
任何新的 Proxmox VE 安装都会自动包含上述所有软件包。
More details about how Secure Boot works, and how to customize the setup, are
available in our wiki.
关于安全启动的工作原理以及如何自定义设置的更多详细信息,请参阅我们的维基。
Switching an Existing Installation to Secure Boot
切换现有安装到安全启动
|
|
This can lead to an unbootable installation in some cases if not done
correctly. Reinstalling the host will setup Secure Boot automatically if
available, without any extra interactions. Make sure you have a working and
well-tested backup of your Proxmox VE host! 如果操作不当,某些情况下可能导致安装无法启动。如果可用,重新安装主机将自动设置安全启动,无需额外操作。请确保您有一个可用且经过充分测试的 Proxmox VE 主机备份! |
An existing UEFI installation can be switched over to Secure Boot if desired,
without having to reinstall Proxmox VE from scratch.
如果需要,现有的 UEFI 安装可以切换到安全启动,无需从头重新安装 Proxmox VE。
First, ensure that your system is up-to-date. Next, install
proxmox-secure-boot-support. GRUB automatically creates the needed EFI boot
entry for booting via the default shim.
首先,确保您的系统已全部更新。接下来,安装 proxmox-secure-boot-support。GRUB 会自动创建所需的 EFI 启动项,以通过默认的 shim 启动。
If systemd-boot is used as a bootloader (see
Determine which Bootloader is used),
some additional setup is needed. This is only the case if Proxmox VE was installed
with ZFS-on-root.
如果使用 systemd-boot 作为引导加载程序(参见确定使用的是哪个引导加载程序),则需要进行一些额外的设置。只有在使用 ZFS-on-root 安装 Proxmox VE 的情况下才会出现这种情况。
To check the latter, run:
要检查后者,请运行:
# findmnt /
If the host is indeed using ZFS as root filesystem, the FSTYPE column
should contain zfs:
如果主机确实使用 ZFS 作为根文件系统,FSTYPE 列应包含 zfs:
TARGET SOURCE           FSTYPE OPTIONS
/      rpool/ROOT/pve-1 zfs    rw,relatime,xattr,noacl,casesensitive
Next, a suitable potential ESP (EFI system partition) must be found. This can be
done using the lsblk command as follows:
接下来,必须找到一个合适的潜在 ESP(EFI 系统分区)。这可以使用 lsblk 命令完成,方法如下:
# lsblk -o +FSTYPE
The output should look something like this:
输出应类似如下:
NAME    MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS FSTYPE
sda       8:0    0    32G  0 disk
├─sda1    8:1    0  1007K  0 part
├─sda2    8:2    0   512M  0 part             vfat
└─sda3    8:3    0  31.5G  0 part             zfs_member
sdb       8:16   0    32G  0 disk
├─sdb1    8:17   0  1007K  0 part
├─sdb2    8:18   0   512M  0 part             vfat
└─sdb3    8:19   0  31.5G  0 part             zfs_member
In this case, the partitions sda2 and sdb2 are the targets. They can be
identified by their size of 512M and their FSTYPE being vfat, in this
case on a ZFS RAID-1 installation.
在这种情况下,分区 sda2 和 sdb2 是目标。它们可以通过大小为 512M 以及 FSTYPE 为 vfat 来识别,在此例中为 ZFS RAID-1 安装。
These partitions must be properly set up for booting through GRUB using
proxmox-boot-tool. This command (using sda2 as an example) must be run
separately for each individual ESP:
这些分区必须通过 proxmox-boot-tool 正确设置以便通过 GRUB 启动。此命令(以 sda2 为例)必须针对每个单独的 ESP 分区分别运行:
# proxmox-boot-tool init /dev/sda2 grub
Afterwards, you can sanity-check the setup by running the following command:
之后,您可以通过运行以下命令来进行设置的合理性检查:
# efibootmgr -v
This list should contain an entry looking similar to this:
该列表应包含类似如下的条目:
[..] Boot0009* proxmox HD(2,GPT,..,0x800,0x100000)/File(\EFI\proxmox\shimx64.efi) [..]
|
|
The old systemd-boot bootloader will be kept, but GRUB will be
preferred. This way, if booting using GRUB in Secure Boot mode does not work for
any reason, the system can still be booted using systemd-boot with Secure Boot
turned off. 旧的 systemd-boot 启动加载程序将被保留,但优先使用 GRUB。这样,如果由于任何原因在安全启动模式下使用 GRUB 启动失败,系统仍然可以在关闭安全启动的情况下使用 systemd-boot 启动。 |
Now the host can be rebooted and Secure Boot enabled in the UEFI firmware setup
utility.
现在可以重启主机,并在 UEFI 固件设置工具中启用安全启动。
On reboot, a new entry named proxmox should be selectable in the UEFI firmware
boot menu, which boots using the pre-signed EFI shim.
重启后,UEFI 固件启动菜单中应可选择一个名为 proxmox 的新条目,该条目使用预签名的 EFI shim 启动。
If, for any reason, no proxmox entry can be found in the UEFI boot menu, you
can try adding it manually (if supported by the firmware), by adding the file
\EFI\proxmox\shimx64.efi as a custom boot entry.
如果由于某种原因,在 UEFI 启动菜单中找不到 proxmox 条目,可以尝试手动添加(如果固件支持),方法是将文件\EFI\proxmox\shimx64.efi 添加为自定义启动条目。
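From a running system, such an entry can also be attempted with efibootmgr; this
sketch assumes the ESP is partition 2 on /dev/sda, so adjust disk and partition
number to your layout:
root@proxmox:~# efibootmgr --create --disk /dev/sda --part 2 --label "proxmox" --loader '\EFI\proxmox\shimx64.efi'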
|
|
Some UEFI firmwares are known to drop the proxmox boot option on reboot.
This can happen if the proxmox boot entry is pointing to a GRUB installation
on a disk, where the disk itself is not a boot option. If possible, try adding
the disk as a boot option in the UEFI firmware setup utility and run
proxmox-boot-tool again. 已知某些 UEFI 固件在重启时会丢失 proxmox 启动选项。如果 proxmox 启动条目指向磁盘上的 GRUB 安装,而该磁盘本身不是启动选项,就可能发生这种情况。如果可能,尝试在 UEFI 固件设置工具中将该磁盘添加为启动选项,然后再次运行 proxmox-boot-tool。 |
|
|
To enroll custom keys, see the accompanying
Secure
Boot wiki page. 要注册自定义密钥,请参阅随附的安全启动(Secure Boot)维基页面。 |
Using DKMS/Third Party Modules With Secure Boot
使用 DKMS/第三方模块与安全启动
On systems with Secure Boot enabled, the kernel will refuse to load modules
which are not signed by a trusted key. The default set of modules shipped with
the kernel packages is signed with an ephemeral key embedded in the kernel
image which is trusted by that specific version of the kernel image.
在启用安全启动的系统上,内核将拒绝加载未由受信任密钥签名的模块。内核包中默认提供的模块集使用嵌入在内核映像中的临时密钥进行签名,该密钥被该特定版本的内核映像信任。
In order to load other modules, such as those built with DKMS or manually, they
need to be signed with a key trusted by the Secure Boot stack. The easiest way
to achieve this is to enroll them as Machine Owner Key (MOK) with mokutil.
为了加载其他模块,例如使用 DKMS 构建的或手动构建的模块,需要使用安全启动堆栈信任的密钥进行签名。实现此目的的最简单方法是使用 mokutil 将它们注册为机器所有者密钥(MOK)。
The dkms tool will automatically generate a keypair and certificate in
/var/lib/dkms/mok.key and /var/lib/dkms/mok.pub and use it for signing
the kernel modules it builds and installs.
dkms 工具会自动在 /var/lib/dkms/mok.key 和 /var/lib/dkms/mok.pub 中生成一对密钥和证书,并使用它们对其构建和安装的内核模块进行签名。
You can view the certificate contents with
您可以使用以下命令查看证书内容
# openssl x509 -in /var/lib/dkms/mok.pub -noout -text
and enroll it on your system using the following command:
并使用以下命令在系统上注册该证书:
# mokutil --import /var/lib/dkms/mok.pub
input password:
input password again:
The mokutil command will ask for a (temporary) password twice, this password
needs to be entered one more time in the next step of the process! Rebooting
the system should automatically boot into the MOKManager EFI binary, which
allows you to verify the key/certificate and confirm the enrollment using the
password selected when starting the enrollment using mokutil. Afterwards, the
kernel should allow loading modules built with DKMS (which are signed with the
enrolled MOK). The MOK can also be used to sign custom EFI binaries and
kernel images if desired.
mokutil 命令会要求输入两次(临时)密码,该密码需要在下一步骤中再次输入!重启系统后,应自动启动到 MOKManager EFI 二进制程序,允许您验证密钥/证书并使用在启动注册时通过 mokutil 选择的密码确认注册。之后,内核应允许加载使用 DKMS 构建(并用已注册的 MOK 签名)的模块。MOK 也可以用于签署自定义的 EFI 二进制文件和内核镜像(如果需要)。
The same procedure can also be used for custom/third-party modules not managed
with DKMS, but the key/certificate generation and signing steps need to be done
manually in that case.
同样的步骤也可以用于不通过 DKMS 管理的自定义/第三方模块,但在这种情况下,密钥/证书的生成和签名步骤需要手动完成。
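One possible approach for those manual steps, shown only as a sketch, is to
re-use an already enrolled MOK (such as the DKMS one above) together with the
sign-file helper shipped with the matching kernel headers; the module path is a
placeholder and the headers package for the running kernel is assumed to be
installed:
root@proxmox:~# /usr/src/linux-headers-$(uname -r)/scripts/sign-file sha256 \
  /var/lib/dkms/mok.key /var/lib/dkms/mok.pub /path/to/custom-module.ko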
3.14. Kernel Samepage Merging (KSM)
3.14. 内核同页合并(KSM)
Kernel Samepage Merging (KSM) is an optional memory deduplication feature
offered by the Linux kernel, which is enabled by default in Proxmox VE. KSM
works by scanning a range of physical memory pages for identical content, and
identifying the virtual pages that are mapped to them. If identical pages are
found, the corresponding virtual pages are re-mapped so that they all point to
the same physical page, and the old pages are freed. The virtual pages are
marked as "copy-on-write", so that any writes to them will be written to a new
area of memory, leaving the shared physical page intact.
内核同页合并(KSM)是 Linux 内核提供的一项可选内存去重功能,在 Proxmox VE 中默认启用。KSM 通过扫描一段物理内存页,查找内容相同的页面,并识别映射到这些页面的虚拟页。如果发现相同的页面,则将对应的虚拟页重新映射,使它们都指向同一个物理页面,并释放旧的页面。虚拟页被标记为“写时复制”,因此对它们的任何写操作都会写入新的内存区域,保持共享的物理页面不变。
3.14.1. Implications of KSM
3.14.1. KSM 的影响
KSM can optimize memory usage in virtualization environments, as multiple VMs
running similar operating systems or workloads could potentially share a lot of
common memory pages.
KSM 可以优化虚拟化环境中的内存使用,因为运行类似操作系统或工作负载的多个虚拟机可能共享大量相同的内存页。
However, while KSM can reduce memory usage, it also comes with some security
risks, as it can expose VMs to side-channel attacks. Research has shown that it
is possible to infer information about a running VM via a second VM on the same
host, by exploiting certain characteristics of KSM.
然而,虽然 KSM 可以减少内存使用,但它也带来一些安全风险,因为它可能使虚拟机暴露于侧信道攻击。研究表明,可以通过同一主机上的第二个虚拟机,利用 KSM 的某些特性推断正在运行的虚拟机的信息。
Thus, if you are using Proxmox VE to provide hosting services, you should consider
disabling KSM, in order to provide your users with additional security.
Furthermore, you should check your country’s regulations, as disabling KSM may
be a legal requirement.
因此,如果您使用 Proxmox VE 提供托管服务,应考虑禁用 KSM,以为用户提供额外的安全保障。此外,您还应检查所在国家的相关法规,因为禁用 KSM 可能是法律要求。
3.14.2. Disabling KSM 3.14.2. 禁用 KSM
To see if KSM is active, you can check the output of:
要查看 KSM 是否处于活动状态,可以检查以下输出:
# systemctl status ksmtuned
If it is, it can be disabled immediately with:
如果是,可以立即通过以下命令禁用:
# systemctl disable --now ksmtuned
Finally, to unmerge all the currently merged pages, run:
最后,要取消所有当前合并的页面,请运行:
# echo 2 > /sys/kernel/mm/ksm/run
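You can confirm the unmerge by watching the kernel's counter of currently shared
pages drop back to zero:
# cat /sys/kernel/mm/ksm/pages_sharing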
4. Graphical User Interface
4. 图形用户界面
Proxmox VE is simple. There is no need to install a separate management
tool, and everything can be done through your web browser (Latest
Firefox or Google Chrome is preferred). A built-in HTML5 console is
used to access the guest console. As an alternative,
SPICE can be used.
Proxmox VE 很简单。无需安装单独的管理工具,所有操作都可以通过您的网页浏览器完成(推荐使用最新的 Firefox 或 Google Chrome)。内置的 HTML5 控制台用于访问客户机控制台。作为替代方案,也可以使用 SPICE。
Because we use the Proxmox cluster file system (pmxcfs), you can
connect to any node to manage the entire cluster. Each node can manage
the entire cluster. There is no need for a dedicated manager node.
由于我们使用 Proxmox 集群文件系统(pmxcfs),您可以连接到任何节点来管理整个集群。每个节点都可以管理整个集群,无需专用的管理节点。
You can use the web-based administration interface with any modern
browser. When Proxmox VE detects that you are connecting from a mobile
device, you are redirected to a simpler, touch-based user interface.
您可以使用任何现代浏览器访问基于网页的管理界面。当 Proxmox VE 检测到您是从移动设备连接时,会自动跳转到更简洁、基于触摸的用户界面。
The web interface can be reached via https://youripaddress:8006
(default login is: root, and the password is specified during the
installation process).
网页界面可以通过 https://youripaddress:8006 访问(默认登录名为 root,密码在安装过程中指定)。
4.1. Features 4.1. 功能
-
Seamless integration and management of Proxmox VE clusters
Proxmox VE 集群的无缝集成与管理 -
AJAX technologies for dynamic updates of resources
用于资源动态更新的 AJAX 技术 -
Secure access to all Virtual Machines and Containers via SSL encryption (https)
通过 SSL 加密(https)实现对所有虚拟机和容器的安全访问 -
Fast search-driven interface, capable of handling hundreds and probably thousands of VMs
快速的搜索驱动界面,能够处理数百甚至数千台虚拟机 -
Secure HTML5 console or SPICE
安全的 HTML5 控制台或 SPICE -
Role based permission management for all objects (VMs, storages, nodes, etc.)
基于角色的权限管理,适用于所有对象(虚拟机、存储、节点等) -
Support for multiple authentication sources (e.g. local, MS ADS, LDAP, …)
支持多种认证来源(例如本地、MS ADS、LDAP 等) -
Two-Factor Authentication (OATH, Yubikey)
双因素认证(OATH,Yubikey) -
Based on ExtJS 7.x JavaScript framework
基于 ExtJS 7.x JavaScript 框架
4.2. Login 4.2. 登录
When you connect to the server, you will first see the login window.
Proxmox VE supports various authentication backends (Realm), and
you can select the language here. The GUI is translated to more
than 20 languages.
当您连接到服务器时,首先会看到登录窗口。Proxmox VE 支持多种认证后端(Realm),您可以在此选择语言。图形界面已被翻译成 20 多种语言。
|
|
You can save the user name on the client side by selecting the
checkbox at the bottom. This saves some typing when you login next
time. 您可以通过勾选底部的复选框将用户名保存在客户端。这样下次登录时可以减少输入。 |
4.3. GUI Overview 4.3. 用户界面概述
|
Header
标题栏 |
On top. Shows status information and contains buttons for
most important actions.
|
|
Resource Tree
资源树 |
At the left side. A navigation tree where you can select
specific objects.
|
|
Content Panel
内容面板 |
Center region. Selected objects display configuration
options and status here.
|
|
Log Panel
日志面板 |
At the bottom. Displays log entries for recent tasks. You
can double-click on those log entries to get more details, or to abort
a running task.
|
|
|
You can shrink and expand the size of the resource tree and log
panel, or completely hide the log panel. This can be helpful when you
work on small displays and want more space to view other content. 您可以缩小和展开资源树和日志面板的大小,或完全隐藏日志面板。当您在小屏幕上工作并希望有更多空间查看其他内容时,这会很有帮助。 |
4.3.1. Header 4.3.1. 头部
On the top left side, the first thing you see is the Proxmox
logo. Next to it is the current running version of Proxmox VE. In the
search bar next to it, you can search for specific objects (VMs,
containers, nodes, …). This is sometimes faster than selecting an
object in the resource tree.
在左上角,首先看到的是 Proxmox 标志。旁边显示的是当前运行的 Proxmox VE 版本。在靠近的搜索栏中,您可以搜索特定的对象(虚拟机、容器、节点等)。这有时比在资源树中选择对象更快。
The right part of the header contains four buttons:
头部的右侧包含四个按钮:
|
Documentation
文档 |
Opens a new browser window showing the reference documentation.
|
|
Create VM
创建虚拟机 |
Opens the virtual machine creation wizard.
|
|
Create CT
创建 CT |
Opens the container creation wizard.
|
|
User Menu
用户菜单 |
Displays the identity of the user you’re currently logged in
with, and clicking it opens a menu with user-specific options.
In the user menu, you’ll find the My Settings dialog, which provides local UI
settings. Below that, there are shortcuts for TFA (Two-Factor Authentication)
and Password self-service. You’ll also find options to change the Language
and the Color Theme. Finally, at the bottom of the menu is the Logout
option. |
4.3.2. My Settings 4.3.2. 我的设置
The My Settings window allows you to set locally stored settings. These
include the Dashboard Storages which allow you to enable or disable specific
storages to be counted towards the total amount visible in the datacenter
summary. If no storage is checked the total is the sum of all storages, same
as enabling every single one.
“我的设置”窗口允许您设置本地存储的设置。这些设置包括仪表板存储,您可以启用或禁用特定存储,以决定它们是否计入数据中心摘要中显示的总量。如果没有选中任何存储,则总量为所有存储的总和,效果等同于启用所有存储。
Below the dashboard settings you find the stored user name and a button to
clear it as well as a button to reset every layout in the GUI to its default.
在仪表板设置下方,您可以看到已存储的用户名及一个清除按钮,还有一个将图形用户界面中所有布局重置为默认的按钮。
On the right side there are xterm.js Settings. These contain the following
options:
右侧是 xterm.js 设置,包含以下选项:
|
Font-Family
字体系列 |
The font to be used in xterm.js (e.g. Arial).
|
|
Font-Size
字体大小 |
The preferred font size to be used.
|
|
Letter Spacing
字母间距 |
Increases or decreases spacing between letters in text.
|
|
Line Height
行高 |
Specify the absolute height of a line.
|
4.3.3. Resource Tree 4.3.3. 资源树
This is the main navigation tree. On top of the tree you can select
some predefined views, which change the structure of the tree
below. The default view is the Server View, and it shows the following
object types:
这是主导航树。在树的顶部,您可以选择一些预定义的视图,这些视图会改变下面树的结构。默认视图是服务器视图,它显示以下对象类型:
|
Datacenter
数据中心 |
Contains cluster-wide settings (relevant for all nodes).
|
|
Node
节点 |
Represents the hosts inside a cluster, where the guests run.
|
|
Guest
客体 |
VMs, containers and templates.
|
|
Storage
存储 |
Data Storage. 数据存储。 |
|
Pool
资源池 |
It is possible to group guests using a pool to simplify
management.
|
The following view types are available:
以下视图类型可用:
|
Server View
服务器视图 |
Shows all kinds of objects, grouped by nodes.
|
|
Folder View
文件夹视图 |
Shows all kinds of objects, grouped by object type.
|
|
Pool View
资源池视图 |
Show VMs and containers, grouped by pool.
|
|
Tag View
标签视图 |
Show VMs and containers, grouped by tags.
|
4.3.4. Log Panel 4.3.4. 日志面板
The main purpose of the log panel is to show you what is currently
going on in your cluster. Actions like creating a new VM are executed
in the background, and we call such a background job a task.
日志面板的主要目的是向您展示集群中当前正在发生的情况。诸如创建新虚拟机之类的操作是在后台执行的,我们将此类后台作业称为任务。
Any output from such a task is saved into a separate log file. You can
view that log by simply double-clicking a task log entry. It is also
possible to abort a running task there.
此类任务的任何输出都会保存到单独的日志文件中。您可以通过双击任务日志条目来查看该日志。也可以在此处中止正在运行的任务。
Please note that we display the most recent tasks from all cluster nodes
here. So you can see when somebody else is working on another cluster
node in real-time.
请注意,我们在此处显示来自所有集群节点的最新任务。因此,您可以实时看到其他人在另一个集群节点上工作。
|
|
We remove older and finished tasks from the log panel to keep
that list short. But you can still find those tasks within the node panel in the
Task History. 我们会从日志面板中移除较旧和已完成的任务,以保持列表简洁。但您仍然可以在节点面板的任务历史中找到这些任务。 |
Some short-running actions simply send logs to all cluster
members. You can see those messages in the Cluster log panel.
一些短时间运行的操作仅将日志发送给所有集群成员。您可以在集群日志面板中看到这些消息。
4.4. Content Panels 4.4. 内容面板
When you select an item from the resource tree, the corresponding
object displays configuration and status information in the content
panel. The following sections provide a brief overview of this
functionality. Please refer to the corresponding chapters in the
reference documentation to get more detailed information.
当您从资源树中选择一个项目时,相应的对象会在内容面板中显示配置信息和状态信息。以下各节简要介绍此功能。有关更详细的信息,请参阅参考文档中的相应章节。
4.4.1. Datacenter 4.4.1. 数据中心
On the datacenter level, you can access cluster-wide settings and information.
在数据中心级别,您可以访问集群范围的设置和信息。
-
Search: perform a cluster-wide search for nodes, VMs, containers, storage devices, and pools.
搜索:执行集群范围内的搜索,查找节点、虚拟机、容器、存储设备和存储池。 -
Summary: gives a brief overview of the cluster’s health and resource usage.
摘要:简要概述集群的健康状况和资源使用情况。 -
Cluster: provides the functionality and information necessary to create or join a cluster.
集群:提供创建或加入集群所需的功能和信息。 -
Options: view and manage cluster-wide default settings.
选项:查看和管理集群范围内的默认设置。 -
Storage: provides an interface for managing cluster storage.
存储:提供管理集群存储的界面。 -
Backup: schedule backup jobs. This operates cluster wide, so it doesn’t matter where the VMs/containers are on your cluster when scheduling.
备份:安排备份任务。此操作在整个集群范围内进行,因此在安排时虚拟机/容器位于集群的哪个位置无关紧要。 -
Replication: view and manage replication jobs.
复制:查看和管理复制任务。 -
Permissions: manage user, group, and API token permissions, and LDAP, MS-AD and Two-Factor authentication.
权限:管理用户、组和 API 代币权限,以及 LDAP、MS-AD 和双因素认证。 -
HA: manage Proxmox VE High Availability.
高可用性:管理 Proxmox VE 高可用性。 -
ACME: set up ACME (Let’s Encrypt) certificates for server nodes.
ACME:为服务器节点设置 ACME(Let’s Encrypt)证书。 -
Firewall: configure and make templates for the Proxmox Firewall cluster wide.
防火墙:配置并为 Proxmox 防火墙集群创建模板。 -
Metric Server: define external metric servers for Proxmox VE.
指标服务器:为 Proxmox VE 定义外部指标服务器。 -
Notifications: configure notification behavior and targets for Proxmox VE.
通知:配置 Proxmox VE 的通知行为和目标。 -
Support: display information about your support subscription.
支持:显示有关您的支持订阅的信息。
4.4.2. Nodes 4.4.2. 节点
The top header has useful buttons such as Reboot, Shutdown, Shell,
Bulk Actions and Help.
Shell has the options noVNC, SPICE and xterm.js.
Bulk Actions has the options Bulk Start, Bulk Shutdown and Bulk Migrate.
顶部标题栏有一些实用按钮,如重启、关机、Shell、批量操作和帮助。Shell 包含 noVNC、SPICE 和 xterm.js 选项。批量操作包含批量启动、批量关机和批量迁移选项。
-
Search: search a node for VMs, containers, storage devices, and pools.
搜索:搜索节点中的虚拟机、容器、存储设备和资源池。 -
Summary: display a brief overview of the node’s resource usage.
摘要:显示节点资源使用情况的简要概览。 -
Notes: write custom comments in Markdown syntax.
备注:使用 Markdown 语法编写自定义注释。 -
Shell: access to a shell interface for the node.
Shell:访问节点的 Shell 界面。 -
System: configure network, DNS and time settings, and access the syslog.
系统:配置网络、DNS 和时间设置,并访问系统日志。 -
Updates: upgrade the system and see the available new packages.
更新:升级系统并查看可用的新软件包。 -
Firewall: manage the Proxmox Firewall for a specific node.
防火墙:管理特定节点的 Proxmox 防火墙。 -
Disks: get an overview of the attached disks, and manage how they are used.
磁盘:查看已连接磁盘的概览,并管理它们的使用方式。 -
Ceph: is only used if you have installed a Ceph server on your host. In this case, you can manage your Ceph cluster and see the status of it here.
Ceph:仅在您在主机上安装了 Ceph 服务器时使用。在这种情况下,您可以在此管理您的 Ceph 集群并查看其状态。 -
Replication: view and manage replication jobs.
复制:查看和管理复制任务。 -
Task History: see a list of past tasks.
任务历史:查看过去任务的列表。 -
Subscription: upload a subscription key, and generate a system report for use in support cases.
订阅:上传订阅密钥,并生成系统报告以用于支持案例。
4.4.3. Guests 4.4.3. 客户机
There are two different kinds of guests and both can be converted to a template.
One of them is a Kernel-based Virtual Machine (KVM) and the other is a Linux Container (LXC).
Navigation for these are mostly the same; only some options are different.
有两种不同类型的客户机,都可以转换为模板。其中一种是基于内核的虚拟机(KVM),另一种是 Linux 容器(LXC)。它们的导航大致相同,只有一些选项不同。
To access the various guest management interfaces, select a VM or container from
the menu on the left.
要访问各种客户机管理界面,请从左侧菜单中选择一个虚拟机或容器。
The header contains commands for items such as power management, migration,
console access and type, cloning, HA, and help.
Some of these buttons contain drop-down menus, for example, Shutdown also contains
other power options, and Console contains the different console types:
SPICE, noVNC and xterm.js.
标题栏包含电源管理、迁移、控制台访问及类型、克隆、高可用性(HA)和帮助等命令。其中一些按钮包含下拉菜单,例如,关机按钮还包含其他电源选项,控制台按钮包含不同的控制台类型:SPICE、noVNC 和 xterm.js。
The panel on the right contains an interface for whatever item is selected from
the menu on the left.
右侧面板包含所选左侧菜单项的界面。
The available interfaces are as follows.
可用的界面如下。
-
Summary: provides a brief overview of the VM’s activity and a Notes field for Markdown syntax comments.
摘要:提供虚拟机活动的简要概述和用于 Markdown 语法注释的备注字段。 -
Console: access to an interactive console for the VM/container.
控制台:访问虚拟机/容器的交互式控制台。 -
(KVM)Hardware: define the hardware available to the KVM VM.
(KVM)硬件:定义可用于 KVM 虚拟机的硬件。 -
(LXC)Resources: define the system resources available to the LXC.
(LXC)资源:定义可用于 LXC 的系统资源。 -
(LXC)Network: configure a container’s network settings.
(LXC)网络:配置容器的网络设置。 -
(LXC)DNS: configure a container’s DNS settings.
(LXC)DNS:配置容器的 DNS 设置。 -
Options: manage guest options.
选项:管理客户机选项。 -
Task History: view all previous tasks related to the selected guest.
任务历史:查看与所选客户机相关的所有先前任务。 -
(KVM) Monitor: an interactive communication interface to the KVM process.
(KVM)监视器:与 KVM 进程的交互式通信接口。 -
Backup: create and restore system backups.
备份:创建和恢复系统备份。 -
Replication: view and manage the replication jobs for the selected guest.
复制:查看和管理所选客户机的复制任务。 -
Snapshots: create and restore VM snapshots.
快照:创建和恢复虚拟机快照。 -
Firewall: configure the firewall on the VM level.
防火墙:在虚拟机级别配置防火墙。 -
Permissions: manage permissions for the selected guest.
权限:管理所选客户机的权限。
4.4.4. Storage 4.4.4. 存储
As with the guest interface, the interface for storage consists of a menu on the
left for certain storage elements and an interface on the right to manage
these elements.
与客户机接口类似,存储接口由左侧的存储元素菜单和右侧用于管理这些元素的界面组成。
In this view, we have a split view with two panels.
On the left side we have the storage options
and on the right side the content of the selected option will be shown.
在此视图中,我们采用了左右分割的双面板视图。左侧是存储选项,右侧显示所选选项的内容。
-
Summary: shows important information about the storage, such as the type, usage, and content which it stores.
摘要:显示有关存储的重要信息,如类型、使用情况及其存储的内容。 -
Content: a menu item for each content type which the storage stores, for example, Backups, ISO Images, CT Templates.
内容:存储所存储的每种内容类型的菜单项,例如备份、ISO 镜像、CT 模板。 -
Permissions: manage permissions for the storage.
权限:管理存储的权限。
4.4.5. Pools 4.4.5. 池
Again, the pools view comprises two partitions: a menu on the left,
and the corresponding interfaces for each menu item on the right.
同样,池视图由两个部分组成:左侧的菜单,以及右侧对应每个菜单项的界面。
-
Summary: shows a description of the pool.
摘要:显示池的描述。 -
Members: display and manage pool members (guests and storage).
成员:显示和管理池成员(客户机和存储)。 -
Permissions: manage the permissions for the pool.
权限:管理池的权限。
4.5. Tags 4.5. 标签
For organizational purposes, it is possible to set tags for guests.
Currently, these only provide informational value to users.
Tags are displayed in two places in the web interface: in the Resource Tree and
in the status line when a guest is selected.
为了便于组织管理,可以为虚拟机设置标签。目前,这些标签仅为用户提供信息价值。标签会显示在网页界面的两个位置:资源树中以及选中虚拟机时的状态栏中。
Tags can be added, edited, and removed in the status line of the guest by
clicking on the pencil icon. You can add multiple tags by pressing the +
button and remove them by pressing the - button. To save or cancel the changes,
you can use the ✓ and x button respectively.
可以通过点击虚拟机状态栏中的铅笔图标来添加、编辑和删除标签。按下 + 按钮可以添加多个标签,按下 - 按钮可以删除标签。要保存或取消更改,可以分别使用 ✓ 和 x 按钮。
Tags can also be set via the CLI, where multiple tags are separated by semicolons.
For example:
标签也可以通过命令行界面(CLI)设置,多个标签之间用分号分隔。例如:
# qm set ID --tags "myfirsttag;mysecondtag"
4.5.1. Style Configuration
4.5.1. 样式配置
By default, the tag colors are derived from their text in a deterministic way.
The color, shape in the resource tree, and case-sensitivity, as well as how tags
are sorted, can be customized. This can be done via the web interface under
Datacenter → Options → Tag Style Override. Alternatively, this can be done
via the CLI. For example:
默认情况下,标签颜色是根据其文本以确定性方式派生的。资源树中的颜色、形状以及大小写敏感性,和标签的排序方式,都可以自定义。可以通过网页界面在数据中心 → 选项 → 标签样式覆盖中进行设置。或者,也可以通过命令行界面完成。例如:
# pvesh set /cluster/options --tag-style color-map=example:000000:FFFFFF
sets the background color of the tag example to black (#000000) and the text
color to white (#FFFFFF).
将标签 example 的背景色设置为黑色(#000000),文本颜色设置为白色(#FFFFFF)。
4.5.2. Permissions 4.5.2. 权限
By default, users with the privilege VM.Config.Options on a guest (/vms/ID)
can set any tags they want (see
Permission Management). If you want to
restrict this behavior, appropriate permissions can be set under
Datacenter → Options → User Tag Access:
默认情况下,拥有虚拟机配置选项(VM.Config.Options)权限的用户可以在某个虚拟机(/vms/ID)上设置任意标签(参见权限管理)。如果想限制此行为,可以在数据中心 → 选项 → 用户标签访问中设置相应权限:
-
free: users are not restricted in setting tags (Default)
free:用户在设置标签时不受限制(默认) -
list: users can set tags based on a predefined list of tags
list:用户可以根据预定义的标签列表设置标签 -
existing: like list but users can also use already existing tags
existing:类似于 list,但用户也可以使用已存在的标签 -
none: users are restricted from using tags
none:用户被限制使用标签
The same can also be done via the CLI.
同样的操作也可以通过命令行界面(CLI)完成。
Note that a user with the Sys.Modify privileges on / is always able to set
or delete any tags, regardless of the settings here. Additionally, there is a
configurable list of registered tags which can only be added and removed by
users with the privilege Sys.Modify on /. The list of registered tags can be
edited under Datacenter → Options → Registered Tags or via the CLI.
请注意,拥有根目录(/)Sys.Modify 权限的用户始终可以设置或删除任何标签,而不受此处设置的限制。此外,有一个可配置的注册标签列表,只有拥有根目录(/)Sys.Modify 权限的用户才能添加和删除。注册标签列表可以在数据中心 → 选项 → 注册标签中编辑,或者通过命令行界面进行管理。
For more details on the exact options and how to invoke them in the CLI, see
Datacenter Configuration.
有关具体选项及如何在命令行中调用它们的更多详细信息,请参见数据中心配置。
4.6. Consent Banner 4.6. 同意横幅
A custom consent banner that has to be accepted before login can be configured
in Datacenter → Options → Consent Text. If there is no
content, the consent banner will not be displayed. The text will be stored as a
base64 string in the /etc/pve/datacenter.cfg config file.
可以在数据中心 → 选项 → 同意文本中配置一个自定义的同意横幅,用户必须在登录前接受该横幅。如果没有内容,则不会显示同意横幅。文本将以 base64 字符串的形式存储在 /etc/pve/datacenter.cfg 配置文件中。
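To illustrate that storage format (a sketch; the exact option name is best
checked in the datacenter.cfg(5) manpage), you can inspect the stored line and
decode it manually:
root@proxmox:~# grep -i consent /etc/pve/datacenter.cfg
root@proxmox:~# echo '<base64-string-from-the-line-above>' | base64 -d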
5. Cluster Manager
5. 集群管理器
The Proxmox VE cluster manager pvecm is a tool to create a group of
physical servers. Such a group is called a cluster. We use the
Corosync Cluster Engine for reliable group
communication. There’s no explicit limit for the number of nodes in a cluster.
In practice, the actual possible node count may be limited by the host and
network performance. Currently (2021), there are reports of clusters (using
high-end enterprise hardware) with over 50 nodes in production.
Proxmox VE 集群管理器 pvecm 是一个用于创建物理服务器组的工具。这样的组称为集群。我们使用 Corosync 集群引擎来实现可靠的组通信。集群中节点的数量没有明确限制。实际上,节点数量可能受主机和网络性能的限制。目前(2021 年),已有使用高端企业硬件的集群在生产环境中超过 50 个节点的报告。
pvecm can be used to create a new cluster, join nodes to a cluster,
leave the cluster, get status information, and do various other cluster-related
tasks. The Proxmox Cluster File System (“pmxcfs”)
is used to transparently distribute the cluster configuration to all cluster
nodes.
pvecm 可用于创建新集群、将节点加入集群、离开集群、获取状态信息以及执行各种其他与集群相关的任务。Proxmox 集群文件系统(“pmxcfs”)用于透明地将集群配置分发到所有集群节点。
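A couple of everyday invocations, as a quick sketch (see the pvecm(1) manpage for
the full list of subcommands):
# pvecm status   # show quorum, membership and vote information
# pvecm nodes    # list the nodes known to the cluster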
Grouping nodes into a cluster has the following advantages:
将节点分组到集群中具有以下优点:
-
Centralized, web-based management
集中式的基于网页的管理 -
Multi-master clusters: each node can do all management tasks
多主集群:每个节点都可以执行所有管理任务 -
Use of pmxcfs, a database-driven file system, for storing configuration files, replicated in real-time on all nodes using corosync
使用 pmxcfs,一种数据库驱动的文件系统,用于存储配置文件,并通过 corosync 实时复制到所有节点 -
Easy migration of virtual machines and containers between physical hosts
虚拟机和容器在物理主机之间的轻松迁移 -
Fast deployment 快速部署
-
Cluster-wide services like firewall and HA
集群范围的服务,如防火墙和高可用性
5.1. Requirements 5.1. 要求
-
All nodes must be able to connect to each other via UDP ports 5405-5412 for corosync to work.
所有节点必须能够通过 UDP 端口 5405-5412 相互连接,以使 corosync 正常工作。 -
Date and time must be synchronized.
日期和时间必须同步。 -
An SSH tunnel on TCP port 22 between nodes is required.
节点之间需要通过 TCP 端口 22 建立 SSH 隧道。 -
If you are interested in High Availability, you need to have at least three nodes for reliable quorum. All nodes should have the same version.
如果您对高可用性感兴趣,至少需要三个节点以确保可靠的仲裁。所有节点应使用相同版本。 -
We recommend a dedicated NIC for the cluster traffic, especially if you use shared storage.
我们建议为集群流量使用专用的网卡,尤其是在使用共享存储时。 -
The root password of a cluster node is required for adding nodes.
添加节点时需要集群节点的 root 密码。 -
Online migration of virtual machines is only supported when nodes have CPUs from the same vendor. It might work otherwise, but this is never guaranteed.
只有当节点的 CPU 来自同一供应商时,才支持虚拟机的在线迁移。其他情况下可能可行,但绝不保证。
|
|
It is not possible to mix Proxmox VE 3.x and earlier with Proxmox VE 4.X cluster
nodes. Proxmox VE 3.x 及更早版本的节点无法与 Proxmox VE 4.X 的集群节点混用。 |
|
|
While it’s possible to mix Proxmox VE 4.4 and Proxmox VE 5.0 nodes, doing so is
not supported as a production configuration and should only be done temporarily,
during an upgrade of the whole cluster from one major version to another. 虽然可以混合使用 Proxmox VE 4.4 和 Proxmox VE 5.0 节点,但这种做法不被支持作为生产环境配置,只应在整个集群从一个主版本升级到另一个主版本的过程中临时进行。 |
|
|
Running a cluster of Proxmox VE 6.x with earlier versions is not possible. The
cluster protocol (corosync) between Proxmox VE 6.x and earlier versions changed
fundamentally. The corosync 3 packages for Proxmox VE 5.4 are only intended for the
upgrade procedure to Proxmox VE 6.0. Proxmox VE 6.x 与早期版本的集群无法共存。Proxmox VE 6.x 与早期版本之间的集群协议(corosync)发生了根本性变化。Proxmox VE 5.4 的 corosync 3 软件包仅用于升级到 Proxmox VE 6.0 的过程。 |
5.2. Preparing Nodes 5.2. 准备节点
First, install Proxmox VE on all nodes. Make sure that each node is
installed with the final hostname and IP configuration. Changing the
hostname and IP is not possible after cluster creation.
首先,在所有节点上安装 Proxmox VE。确保每个节点安装时使用最终的主机名和 IP 配置。集群创建后,无法更改主机名和 IP。
While it’s common to reference all node names and their IPs in /etc/hosts (or
make their names resolvable through other means), this is not necessary for a
cluster to work. It may be useful however, as you can then connect from one node
to another via SSH, using the easier to remember node name (see also
Link Address Types). Note that we always
recommend referencing nodes by their IP addresses in the cluster configuration.
虽然通常会在 /etc/hosts 中引用所有节点名称及其 IP(或通过其他方式使其名称可解析),但这对于集群的正常运行并非必要。不过,这样做可能会有用,因为你可以通过更易记的节点名称通过 SSH 从一个节点连接到另一个节点(另见链接地址类型)。请注意,我们始终建议在集群配置中通过 IP 地址引用节点。
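For example, a three-node setup might add entries like these to /etc/hosts on every node (the names and addresses are only illustrations):
192.168.15.91 hp1.example.invalid hp1
192.168.15.92 hp2.example.invalid hp2
192.168.15.93 hp3.example.invalid hp3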
5.3. Create a Cluster
5.3. 创建集群
You can either create a cluster on the console (login via ssh), or through
the API using the Proxmox VE web interface (Datacenter → Cluster).
你可以在控制台(通过 ssh 登录)上创建集群,也可以通过 Proxmox VE 网页界面(数据中心 → 集群)使用 API 创建集群。
|
|
Use a unique name for your cluster. This name cannot be changed later.
The cluster name follows the same rules as node names. 为你的集群使用一个唯一的名称。该名称以后无法更改。集群名称遵循与节点名称相同的规则。 |
5.3.1. Create via Web GUI
5.3.1. 通过网页图形界面创建
Under Datacenter → Cluster, click on Create Cluster. Enter the cluster
name and select a network connection from the drop-down list to serve as the
main cluster network (Link 0). It defaults to the IP resolved via the node’s
hostname.
在数据中心 → 集群中,点击创建集群。输入集群名称,并从下拉列表中选择一个网络连接作为主集群网络(Link 0)。默认使用通过节点主机名解析的 IP 地址。
As of Proxmox VE 6.2, up to 8 fallback links can be added to a cluster. To add a
redundant link, click the Add button and select a link number and IP address
from the respective fields. Prior to Proxmox VE 6.2, to add a second link as
fallback, you can select the Advanced checkbox and choose an additional
network interface (Link 1, see also Corosync Redundancy).
从 Proxmox VE 6.2 开始,最多可以向集群添加 8 个备用链路。要添加冗余链路,点击添加按钮,并从相应字段中选择链路编号和 IP 地址。在 Proxmox VE 6.2 之前,若要添加第二条备用链路,可以勾选高级复选框并选择额外的网络接口(Link 1,参见 Corosync 冗余)。
|
|
Ensure that the network selected for cluster communication is not used for
any high traffic purposes, like network storage or live-migration.
While the cluster network itself produces small amounts of data, it is very
sensitive to latency. Check out full
cluster network requirements. 确保为集群通信选择的网络不用于任何高流量用途,如网络存储或实时迁移。虽然集群网络本身产生的数据量很小,但对延迟非常敏感。请查看完整的集群网络要求。 |
5.3.2. Create via the Command Line
5.3.2. 通过命令行创建
Login via ssh to the first Proxmox VE node and run the following command:
通过 ssh 登录到第一个 Proxmox VE 节点并运行以下命令:
hp1# pvecm create CLUSTERNAME
To check the state of the new cluster use:
要检查新集群的状态,请使用:
hp1# pvecm status
5.3.3. Multiple Clusters in the Same Network
5.3.3. 同一网络中的多个集群
It is possible to create multiple clusters in the same physical or logical
network. In this case, each cluster must have a unique name to avoid possible
clashes in the cluster communication stack. Furthermore, this helps avoid human
confusion by making clusters clearly distinguishable.
可以在同一物理或逻辑网络中创建多个集群。在这种情况下,每个集群必须有一个唯一的名称,以避免集群通信栈中可能发生的冲突。此外,这有助于通过使集群明显区分开来,避免人为混淆。
While the bandwidth requirement of a corosync cluster is relatively low, the
latency of packets and the packets per second (PPS) rate are the limiting
factors. Different clusters in the same network can compete with each other for
these resources, so it may still make sense to use separate physical network
infrastructure for bigger clusters.
虽然 corosync 集群的带宽需求相对较低,但数据包的延迟和每秒数据包数(PPS)是限制因素。同一网络中的不同集群可能会相互竞争这些资源,因此对于较大的集群,使用独立的物理网络基础设施仍然是合理的。
5.4. Adding Nodes to the Cluster
5.4. 向集群添加节点
|
|
All existing configuration in /etc/pve is overwritten when joining a
cluster. In particular, a joining node cannot hold any guests, since guest IDs
could otherwise conflict, and the node will inherit the cluster’s storage
configuration. To join a node with existing guests, as a workaround, you can
create a backup of each guest (using vzdump) and restore it under a different
ID after joining. If the node’s storage layout differs, you will need to re-add
the node’s storages, and adapt each storage’s node restriction to reflect on
which nodes the storage is actually available. 加入集群时,/etc/pve 中的所有现有配置都会被覆盖。特别是,加入的节点不能包含任何虚拟机,因为虚拟机 ID 可能会冲突,并且该节点将继承集群的存储配置。作为变通方法,如果要加入已有虚拟机的节点,可以使用 vzdump 创建每个虚拟机的备份,并在加入后以不同的 ID 恢复它们。如果节点的存储布局不同,则需要重新添加节点的存储,并调整每个存储的节点限制,以反映该存储实际可用的节点。 |
5.4.1. Join Node to Cluster via GUI
5.4.1. 通过 GUI 将节点加入集群
Log in to the web interface on an existing cluster node. Under Datacenter →
Cluster, click the Join Information button at the top. Then, click on the
button Copy Information. Alternatively, copy the string from the Information
field manually.
登录到现有集群节点的网页界面。在“数据中心”→“集群”下,点击顶部的“加入信息”按钮。然后,点击“复制信息”按钮。或者,也可以手动复制“信息”字段中的字符串。
Next, log in to the web interface on the node you want to add.
Under Datacenter → Cluster, click on Join Cluster. Fill in the
Information field with the Join Information text you copied earlier.
Most settings required for joining the cluster will be filled out
automatically. For security reasons, the cluster password has to be entered
manually.
接下来,登录到您想要添加的节点的网页界面。在“数据中心 → 集群”下,点击“加入集群”。在“信息”字段中填写您之前复制的加入信息文本。加入集群所需的大部分设置将自动填写。出于安全原因,集群密码必须手动输入。
|
|
To enter all required data manually, you can disable the Assisted Join
checkbox. 如果想手动输入所有必需的数据,可以取消选中“辅助加入”复选框。 |
After clicking the Join button, the cluster join process will start
immediately. After the node has joined the cluster, its current node certificate
will be replaced by one signed from the cluster certificate authority (CA).
This means that the current session will stop working after a few seconds. You
then might need to force-reload the web interface and log in again with the
cluster credentials.
点击“加入”按钮后,集群加入过程将立即开始。节点加入集群后,其当前节点证书将被集群证书颁发机构(CA)签发的新证书替换。这意味着当前会话将在几秒钟后停止工作。您可能需要强制重新加载网页界面,并使用集群凭据重新登录。
Now your node should be visible under Datacenter → Cluster.
现在,您的节点应该可以在“数据中心 → 集群”下看到。
5.4.2. Join Node to Cluster via Command Line
5.4.2. 通过命令行将节点加入集群
Log in to the node you want to join into an existing cluster via ssh.
通过 ssh 登录到您想要加入现有集群的节点。
# pvecm add IP-ADDRESS-CLUSTER
For IP-ADDRESS-CLUSTER, use the IP or hostname of an existing cluster node.
An IP address is recommended (see Link Address Types).
对于 IP-ADDRESS-CLUSTER,使用现有集群节点的 IP 地址或主机名。建议使用 IP 地址(参见链接地址类型)。
To check the state of the cluster use:
要检查集群状态,请使用:
# pvecm status
Cluster state after adding 4 nodes 添加 4 个节点后的集群状态
# pvecm status
Cluster information
~~~~~~~~~~~~~~~~~~~
Name: prod-central
Config Version: 3
Transport: knet
Secure auth: on
Quorum information
~~~~~~~~~~~~~~~~~~
Date: Tue Sep 14 11:06:47 2021
Quorum provider: corosync_votequorum
Nodes: 4
Node ID: 0x00000001
Ring ID: 1.1a8
Quorate: Yes
Votequorum information
~~~~~~~~~~~~~~~~~~~~~~
Expected votes: 4
Highest expected: 4
Total votes: 4
Quorum: 3
Flags: Quorate
Membership information
~~~~~~~~~~~~~~~~~~~~~~
Nodeid Votes Name
0x00000001 1 192.168.15.91
0x00000002 1 192.168.15.92 (local)
0x00000003 1 192.168.15.93
0x00000004 1 192.168.15.94
If you only want a list of all nodes, use:
如果您只想列出所有节点,请使用:
# pvecm nodes
List nodes in a cluster 列出集群中的节点
# pvecm nodes
Membership information
~~~~~~~~~~~~~~~~~~~~~~
Nodeid Votes Name
1 1 hp1
2 1 hp2 (local)
3 1 hp3
4 1 hp4
5.4.3. Adding Nodes with Separated Cluster Network
5.4.3. 使用独立集群网络添加节点
When adding a node to a cluster with a separated cluster network, you need to
use the link0 parameter to set the nodes address on that network:
当向具有独立集群网络的集群添加节点时,您需要使用 link0 参数来设置该节点在该网络上的地址:
# pvecm add IP-ADDRESS-CLUSTER --link0 LOCAL-IP-ADDRESS-LINK0
If you want to use the built-in redundancy of the
Kronosnet transport layer, also use the link1 parameter.
如果您想使用 Kronosnet 传输层内置的冗余功能,还需使用 link1 参数。
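For example, with an assumed existing cluster node at 192.168.15.91 and local addresses 10.10.10.14 and 10.20.20.14 on the two separated networks, the call could look like:
# pvecm add 192.168.15.91 --link0 10.10.10.14 --link1 10.20.20.14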
Using the GUI, you can select the correct interface from the corresponding
Link X fields in the Cluster Join dialog.
使用图形界面时,您可以在“集群加入”对话框中的相应 Link X 字段中选择正确的接口。
5.5. Remove a Cluster Node
5.5. 移除集群节点
|
|
Read the procedure carefully before proceeding, as it may
not be what you want or need. 在继续操作之前,请仔细阅读此步骤,因为它可能不是您想要或需要的。 |
Move all virtual machines from the node. Ensure that you have made copies of any
local data or backups that you want to keep. In addition, make sure to remove
any scheduled replication jobs to the node to be removed.
将所有虚拟机从该节点迁移出去。确保您已备份所有本地数据或想要保留的备份。此外,务必删除所有指向将被移除节点的计划复制任务。
|
|
Failure to remove replication jobs to a node before removing said node
will result in the replication job becoming irremovable. Especially note that
replication automatically switches direction if a replicated VM is migrated, so
by migrating a replicated VM from a node to be deleted, replication jobs will be
set up to that node automatically. 如果在移除节点之前未删除指向该节点的复制任务,复制任务将变得无法删除。特别需要注意的是,如果迁移了被复制的虚拟机,复制方向会自动切换,因此通过将被复制的虚拟机从将被删除的节点迁移出去,复制任务会自动设置到该节点。 |
If the node to be removed has been configured for
Ceph:
如果要移除的节点已配置 Ceph:
-
Ensure that sufficient Proxmox VE nodes with running OSDs (up and in) continue to exist.
确保有足够的运行中且状态正常的 Proxmox VE 节点存在。By default, Ceph pools have a size/min_size of 3/2 and a full node as failure domain at the object balancer CRUSH. So if less than size (3) nodes with running OSDs are online, data redundancy will be degraded. If less than min_size are online, pool I/O will be blocked and affected guests may crash.
默认情况下,Ceph 池的 size/min_size 为 3/2,且在对象均衡器 CRUSH 中以全节点作为故障域。因此,如果在线的运行中 OSD 节点少于 size(3)个,数据冗余将被降低。如果在线的运行中 OSD 节点少于 min_size,池的 I/O 将被阻塞,受影响的虚拟机可能会崩溃。 -
Ensure that sufficient monitors, managers and, if using CephFS, metadata servers remain available.
确保有足够的监视器、管理器以及(如果使用 CephFS)元数据服务器保持可用。 -
To maintain data redundancy, each destruction of an OSD, especially the last one on a node, will trigger a data rebalance. Therefore, ensure that the OSDs on the remaining nodes have sufficient free space left.
为了维护数据冗余,每次销毁 OSD,尤其是节点上的最后一个 OSD,都会触发数据重新平衡。因此,确保剩余节点上的 OSD 有足够的剩余空间。 -
To remove Ceph from the node to be deleted, start by destroying its OSDs, one after the other (see the command sketch after this list).
要从要删除的节点中移除 Ceph,首先逐个销毁其 OSD。 -
Once the CEPH status is HEALTH_OK again, proceed by:
一旦 CEPH 状态再次显示为 HEALTH_OK,继续进行:-
destroying its metadata server via web interface at Ceph → CephFS or by running:
通过 Web 界面在 Ceph → CephFS 中销毁其元数据服务器,或运行以下命令:# pveceph mds destroy <local hostname>
-
-
Finally, remove the now empty bucket (Proxmox VE node to be removed) from the CRUSH hierarchy by running:
最后,通过运行以下命令从 CRUSH 层次结构中移除现在已空的桶(要移除的 Proxmox VE 节点):# ceph osd crush remove <hostname>
In the following example, we will remove the node hp4 from the cluster.
在以下示例中,我们将从集群中移除节点 hp4。
Log in to a different cluster node (not hp4), and issue a pvecm nodes
command to identify the node ID to remove:
登录到另一个集群节点(不是 hp4),并执行 pvecm nodes 命令以确定要移除的节点 ID:
hp1# pvecm nodes
Membership information
~~~~~~~~~~~~~~~~~~~~~~
Nodeid Votes Name
1 1 hp1 (local)
2 1 hp2
3 1 hp3
4 1 hp4
At this point, you must power off hp4 and ensure that it will not power on
again (in the network) with its current configuration.
此时,您必须关闭 hp4 并确保其当前配置下不会再次开机(在网络中)。
|
|
As mentioned above, it is critical to power off the node
before removal, and make sure that it will not power on again
(in the existing cluster network) with its current configuration.
If you power on the node as it is, the cluster could end up broken,
and it could be difficult to restore it to a functioning state. 如上所述,移除节点前务必关闭该节点,并确保其当前配置下不会在现有集群网络中再次开机。如果按原样开机,集群可能会损坏,且恢复到正常状态可能会很困难。 |
After powering off the node hp4, we can safely remove it from the cluster.
关闭节点 hp4 后,我们可以安全地将其从集群中移除。
hp1# pvecm delnode hp4
Killing node 4
|
|
At this point, it is possible that you will receive an error message
stating Could not kill node (error = CS_ERR_NOT_EXIST). This does not
signify an actual failure in the deletion of the node, but rather a failure in
corosync trying to kill an offline node. Thus, it can be safely ignored. 此时,您可能会收到一条错误信息,提示无法终止节点(错误 = CS_ERR_NOT_EXIST)。这并不表示节点删除失败,而是 corosync 尝试终止一个离线节点失败。因此,可以安全忽略该错误。 |
Use pvecm nodes or pvecm status to check the node list again. It should
look something like:
使用 pvecm nodes 或 pvecm status 再次检查节点列表。它应该看起来像这样:
hp1# pvecm status
...
Votequorum information
~~~~~~~~~~~~~~~~~~~~~~
Expected votes: 3
Highest expected: 3
Total votes: 3
Quorum: 2
Flags: Quorate
Membership information
~~~~~~~~~~~~~~~~~~~~~~
Nodeid Votes Name
0x00000001 1 192.168.15.90 (local)
0x00000002 1 192.168.15.91
0x00000003 1 192.168.15.92
If, for whatever reason, you want this server to join the same cluster again,
you have to:
如果出于某种原因,您想让这台服务器重新加入同一个集群,您必须:
-
do a fresh install of Proxmox VE on it,
对其进行全新安装 Proxmox VE, -
then join it, as explained in the previous section.
然后按照上一节所述加入集群。
The configuration files for the removed node will still reside in
/etc/pve/nodes/hp4. Recover any configuration you still need and remove the
directory afterwards.
被移除节点的配置文件仍将保留在 /etc/pve/nodes/hp4 中。请恢复您仍需要的任何配置,然后删除该目录。
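A minimal sketch of that cleanup (the backup path is just an example):
mkdir -p /root/hp4-config-backup
cp -a /etc/pve/nodes/hp4/. /root/hp4-config-backup/
rm -rf /etc/pve/nodes/hp4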
|
|
After removal of the node, its SSH fingerprint will still reside in the
known_hosts of the other nodes. If you receive an SSH error after rejoining
a node with the same IP or hostname, run pvecm updatecerts once on the
re-added node to update its fingerprint cluster wide. 节点移除后,其 SSH 指纹仍会保留在其他节点的 known_hosts 文件中。如果在使用相同 IP 或主机名重新加入节点后收到 SSH 错误,请在重新添加的节点上运行一次 pvecm updatecerts,以在整个集群中更新其指纹。 |
5.5.1. Separate a Node Without Reinstalling
5.5.1. 在不重新安装的情况下分离节点
|
|
This is not the recommended method, proceed with caution. Use the
previous method if you’re unsure. 这不是推荐的方法,请谨慎操作。如果不确定,请使用前面的方法。 |
You can also separate a node from a cluster without reinstalling it from
scratch. But after removing the node from the cluster, it will still have
access to any shared storage. This must be resolved before you start removing
the node from the cluster. A Proxmox VE cluster cannot share the exact same
storage with another cluster, as storage locking doesn’t work over the cluster
boundary. Furthermore, it may also lead to VMID conflicts.
您也可以在不重新安装的情况下将节点从集群中分离。但在将节点从集群中移除后,它仍然可以访问任何共享存储。在开始移除节点之前,必须解决这个问题。Proxmox VE 集群不能与另一个集群共享完全相同的存储,因为存储锁定无法跨集群边界工作。此外,这还可能导致 VMID 冲突。
It’s suggested that you create a new storage, where only the node which you want
to separate has access. This can be a new export on your NFS or a new Ceph
pool, to name a few examples. It’s just important that the exact same storage
does not get accessed by multiple clusters. After setting up this storage, move
all data and VMs from the node to it. Then you are ready to separate the
node from the cluster.
建议您创建一个新的存储,只有您想要分离的节点可以访问。这可以是您的 NFS 上的新导出,或者是一个新的 Ceph 池,仅举几个例子。关键是确保完全相同的存储不会被多个集群访问。设置好此存储后,将节点上的所有数据和虚拟机迁移到该存储。然后您就可以准备将节点从集群中分离。
|
|
Ensure that all shared resources are cleanly separated! Otherwise you
will run into conflicts and problems. 确保所有共享资源都已被干净地分离!否则您将遇到冲突和问题。 |
First, stop the corosync and pve-cluster services on the node:
首先,停止该节点上的 corosync 和 pve-cluster 服务:
systemctl stop pve-cluster
systemctl stop corosync
Start the cluster file system again in local mode:
以本地模式重新启动集群文件系统:
pmxcfs -l
Delete the corosync configuration files:
删除 corosync 配置文件:
rm /etc/pve/corosync.conf
rm -r /etc/corosync/*
You can now start the file system again as a normal service:
现在您可以将文件系统作为普通服务重新启动:
killall pmxcfs
systemctl start pve-cluster
The node is now separated from the cluster. You can delete it from any
remaining node of the cluster with:
该节点现已与集群分离。您可以在集群中任何剩余的节点上使用以下命令将其删除:
pvecm delnode oldnode
If the command fails due to a loss of quorum in the remaining node, you can set
the expected votes to 1 as a workaround:
如果命令因剩余节点失去法定人数而失败,您可以将预期投票数设置为 1 作为解决方法:
pvecm expected 1
And then repeat the pvecm delnode command.
然后重复执行 pvecm delnode 命令。
Now switch back to the separated node and delete all the remaining cluster
files on it. This ensures that the node can be added to another cluster again
without problems.
现在切换回被分离的节点,删除其上的所有剩余集群文件。这确保该节点可以无问题地再次添加到另一个集群中。
rm /var/lib/corosync/*
As the configuration files from the other nodes are still in the cluster
file system, you may want to clean those up too. After making absolutely sure
that you have the correct node name, you can simply remove the entire
directory recursively from /etc/pve/nodes/NODENAME.
由于其他节点的配置文件仍保存在集群文件系统中,您可能也想清理这些文件。在完全确认节点名称无误后,您可以直接递归删除 /etc/pve/nodes/NODENAME 目录。
|
|
The node’s SSH keys will remain in the authorized_keys file. This
means that the nodes can still connect to each other with public key
authentication. You should fix this by removing the respective keys from the
/etc/pve/priv/authorized_keys file. 节点的 SSH 密钥将保留在 authorized_key 文件中。这意味着节点仍然可以通过公钥认证相互连接。你应该通过从 /etc/pve/priv/authorized_keys 文件中删除相应的密钥来解决此问题。 |
5.6. Quorum 5.6. 仲裁
Proxmox VE uses a quorum-based technique to provide a consistent state among
all cluster nodes.
Proxmox VE 使用基于仲裁的技术来在所有集群节点之间提供一致的状态。
A quorum is the minimum number of votes that a distributed transaction
has to obtain in order to be allowed to perform an operation in a
distributed system.
仲裁是分布式系统中分布式事务必须获得的最小投票数,以允许执行操作。
Quorum (distributed computing) 法定人数(分布式计算)
— from Wikipedia
— 来自维基百科
In case of network partitioning, state changes require that a
majority of nodes are online. The cluster switches to read-only mode
if it loses quorum.
在网络分区的情况下,状态更改需要大多数节点在线。如果集群失去法定人数,则切换到只读模式。
|
|
Proxmox VE assigns a single vote to each node by default. Proxmox VE 默认为每个节点分配一个投票权。 |
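For example, with the default of one vote per node, a 5-node cluster has 5 votes in total and stays quorate as long as at least 3 nodes (more than half) are online; a 4-node cluster likewise needs 3 nodes online, which is why losing 2 out of 4 nodes already blocks state changes.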
5.7. Cluster Network 5.7. 集群网络
The cluster network is the core of a cluster. All messages sent over it have to
be delivered reliably to all nodes in their respective order. In Proxmox VE this
part is done by corosync, an implementation of a high performance, low overhead,
high availability development toolkit. It serves our decentralized configuration
file system (pmxcfs).
集群网络是集群的核心。所有通过它发送的消息必须可靠地按顺序传递到各自的节点。在 Proxmox VE 中,这部分由 corosync 完成,corosync 是一个高性能、低开销、高可用性开发工具包的实现。它为我们的去中心化配置文件系统(pmxcfs)提供服务。
5.7.1. Network Requirements
5.7.1. 网络要求
The Proxmox VE cluster stack requires a reliable network with latencies under 5
milliseconds (LAN performance) between all nodes to operate stably. While on
setups with a small node count a network with higher latencies may work, this
is not guaranteed and gets rather unlikely with more than three nodes and
latencies above around 10 ms.
Proxmox VE 集群堆栈需要一个可靠的网络,所有节点之间的延迟低于 5 毫秒(局域网性能),以保证稳定运行。虽然在节点数量较少的设置中,延迟较高的网络可能也能工作,但这并不保证,且当节点超过三个且延迟超过约 10 毫秒时,这种可能性会大大降低。
The network should not be used heavily by other members, as while corosync does
not use much bandwidth, it is sensitive to latency jitter; ideally corosync
runs on its own physically separated network. Especially do not use a shared
network for corosync and storage (except as a potential low-priority fallback
in a redundant configuration).
网络不应被其他成员大量使用,因为虽然 corosync 不占用太多带宽,但它对延迟抖动非常敏感;理想情况下,corosync 应运行在其独立的物理隔离网络上。尤其不要将 corosync 和存储共用同一网络(除非在冗余配置中作为潜在的低优先级备用网络)。
Before setting up a cluster, it is good practice to check if the network is fit
for that purpose. To ensure that the nodes can connect to each other on the
cluster network, you can test the connectivity between them with the ping
tool.
在设置集群之前,最好先检查网络是否适合该用途。为了确保节点之间能够通过集群网络互相连接,可以使用 ping 工具测试它们之间的连通性。
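For example, from each node you could ping every other node's address on the intended cluster network (the addresses are only illustrations):
ping -c 4 10.10.10.2
ping -c 4 10.10.10.3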
If the Proxmox VE firewall is enabled, ACCEPT rules for corosync will automatically
be generated - no manual action is required.
如果启用了 Proxmox VE 防火墙,corosync 的 ACCEPT 规则将自动生成——无需手动操作。
|
|
Corosync used Multicast before version 3.0 (introduced in Proxmox VE 6.0).
Modern versions rely on Kronosnet for cluster
communication, which, for now, only supports regular UDP unicast. corosync 在 3.0 版本之前(Proxmox VE 6.0 引入)使用多播。现代版本依赖 Kronosnet 进行集群通信,目前仅支持常规的 UDP 单播。 |
|
|
You can still enable Multicast or legacy unicast by setting your
transport to udp or udpu in your corosync.conf,
but keep in mind that this will disable all cryptography and redundancy support.
This is therefore not recommended. 您仍然可以通过在 corosync.conf 中将传输设置为 udp 或 udpu 来启用多播或传统单播,但请注意,这将禁用所有加密和冗余支持。因此,不推荐这样做。 |
5.7.2. Separate Cluster Network
5.7.2. 独立的集群网络
When creating a cluster without any parameters, the corosync cluster network is
generally shared with the web interface and the VMs' network. Depending on
your setup, even storage traffic may get sent over the same network. It’s
recommended to change that, as corosync is a time-critical, real-time
application.
在创建集群时如果不带任何参数,corosync 集群网络通常会与 Web 界面和虚拟机的网络共享。根据您的设置,甚至存储流量也可能通过同一网络传输。建议更改此设置,因为 corosync 是一个时间敏感的实时应用。
Setting Up a New Network
设置新网络
First, you have to set up a new network interface. It should be on a physically
separate network. Ensure that your network fulfills the
cluster network requirements.
首先,您需要设置一个新的网络接口。它应该位于一个物理上独立的网络上。确保您的网络满足集群网络的要求。
Separate On Cluster Creation
在创建集群时分离
This is possible via the linkX parameters of the pvecm create
command, used for creating a new cluster.
这可以通过 pvecm create 命令的 linkX 参数实现,该命令用于创建新集群。
If you have set up an additional NIC with a static address on 10.10.10.1/25,
and want to send and receive all cluster communication over this interface,
you would execute:
如果您已经设置了一个附加的网卡,静态地址为 10.10.10.1/25,并且希望通过该接口发送和接收所有集群通信,您可以执行:
pvecm create test --link0 10.10.10.1
To check if everything is working properly, execute:
要检查一切是否正常工作,请执行:
systemctl status corosync
Afterwards, proceed as described above to
add nodes with a separated cluster network.
之后,按照上述描述继续添加具有独立集群网络的节点。
Separate After Cluster Creation
集群创建后分离
You can do this if you have already created a cluster and want to switch
its communication to another network, without rebuilding the whole cluster.
This change may lead to short periods of quorum loss in the cluster, as nodes
have to restart corosync and come up one after the other on the new network.
如果您已经创建了集群,并且想将其通信切换到另一个网络,而不重建整个集群,可以这样做。此更改可能会导致集群中出现短暂的法定人数丢失,因为节点必须重新启动 corosync,并在新网络上依次启动。
Check how to edit the corosync.conf file first.
Then, open it and you should see a file similar to:
首先查看如何编辑 corosync.conf 文件。然后打开它,你应该会看到类似以下内容的文件:
logging {
debug: off
to_syslog: yes
}
nodelist {
node {
name: due
nodeid: 2
quorum_votes: 1
ring0_addr: due
}
node {
name: tre
nodeid: 3
quorum_votes: 1
ring0_addr: tre
}
node {
name: uno
nodeid: 1
quorum_votes: 1
ring0_addr: uno
}
}
quorum {
provider: corosync_votequorum
}
totem {
cluster_name: testcluster
config_version: 3
ip_version: ipv4-6
secauth: on
version: 2
interface {
linknumber: 0
}
}
|
|
ringX_addr actually specifies a corosync link address. The name "ring"
is a remnant of older corosync versions that is kept for backwards
compatibility. ringX_addr 实际上指定了 corosync 链路地址。名称“ring”是旧版本 corosync 的遗留名称,为了向后兼容而保留。 |
The first thing you want to do is add the name properties in the node entries,
if you do not see them already. Those must match the node name.
你首先要做的是在节点条目中添加 name 属性,如果你还没有看到它们。它们必须与节点名称匹配。
Then replace all addresses from the ring0_addr properties of all nodes with
the new addresses. You may use plain IP addresses or hostnames here. If you use
hostnames, ensure that they are resolvable from all nodes (see also
Link Address Types).
然后将所有节点的 ring0_addr 属性中的地址替换为新的地址。这里可以使用普通的 IP 地址或主机名。如果使用主机名,确保所有节点都能解析它们(另见链路地址类型)。
In this example, we want to switch cluster communication to the
10.10.10.0/25 network, so we change the ring0_addr of each node respectively.
在此示例中,我们希望将集群通信切换到 10.10.10.0/25 网络,因此分别更改每个节点的 ring0_addr。
|
|
The exact same procedure can be used to change other ringX_addr values
as well. However, we recommend only changing one link address at a time, so
that it’s easier to recover if something goes wrong. 完全相同的步骤也可以用来更改其他 ringX_addr 的值。不过,我们建议一次只更改一个链路地址,这样如果出现问题,更容易恢复。 |
After we increase the config_version property, the new configuration file
should look like:
在我们增加 config_version 属性后,新的配置文件应如下所示:
logging {
debug: off
to_syslog: yes
}
nodelist {
node {
name: due
nodeid: 2
quorum_votes: 1
ring0_addr: 10.10.10.2
}
node {
name: tre
nodeid: 3
quorum_votes: 1
ring0_addr: 10.10.10.3
}
node {
name: uno
nodeid: 1
quorum_votes: 1
ring0_addr: 10.10.10.1
}
}
quorum {
provider: corosync_votequorum
}
totem {
cluster_name: testcluster
config_version: 4
ip_version: ipv4-6
secauth: on
version: 2
interface {
linknumber: 0
}
}
Then, after a final check to see that all changed information is correct, we
save it and once again follow the
edit corosync.conf file section to bring it into
effect.
然后,在最后检查所有更改的信息是否正确后,保存文件,并再次按照编辑 corosync.conf 文件部分的步骤使其生效。
The changes will be applied live, so restarting corosync is not strictly
necessary. If you changed other settings as well, or notice corosync
complaining, you can optionally trigger a restart.
更改将实时应用,因此不一定非要重启 corosync。如果你也更改了其他设置,或者注意到 corosync 出现警告,可以选择触发重启。
On a single node execute:
在单个节点上执行:
systemctl restart corosync
Now check if everything is okay:
现在检查一切是否正常:
systemctl status corosync
If corosync begins to work again, restart it on all other nodes too.
They will then join the cluster membership one by one on the new network.
如果 corosync 重新开始工作,也请在所有其他节点上重启它。它们随后将依次通过新网络加入集群成员。
5.7.3. Corosync Addresses
5.7.3. Corosync 地址
A corosync link address (for backwards compatibility denoted by ringX_addr in
corosync.conf) can be specified in two ways:
corosync 链路地址(为了向后兼容,在 corosync.conf 中用 ringX_addr 表示)可以通过两种方式指定:
-
IPv4/v6 addresses can be used directly. They are recommended, since they are static and usually not changed carelessly.
可以直接使用 IPv4/v6 地址。推荐使用它们,因为它们是静态的,通常不会被随意更改。 -
Hostnames will be resolved using getaddrinfo, which means that by default, IPv6 addresses will be used first, if available (see also man gai.conf). Keep this in mind, especially when upgrading an existing cluster to IPv6.
主机名将通过 getaddrinfo 进行解析,这意味着默认情况下,如果可用,将优先使用 IPv6 地址(另见 man gai.conf)。在将现有集群升级到 IPv6 时,请特别注意这一点。
|
|
Hostnames should be used with care, since the addresses they
resolve to can be changed without touching corosync or the node it runs on -
which may lead to a situation where an address is changed without thinking
about implications for corosync. 主机名应谨慎使用,因为它们解析到的地址可以在不修改 corosync 或其运行节点的情况下更改——这可能导致地址被更改而未考虑对 corosync 的影响。 |
A separate, static hostname specifically for corosync is recommended, if
hostnames are preferred. Also, make sure that every node in the cluster can
resolve all hostnames correctly.
如果偏好使用主机名,建议为 corosync 设置一个单独的、静态的主机名。同时,确保集群中的每个节点都能正确解析所有主机名。
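A minimal sketch of such dedicated names, kept identical in /etc/hosts on all nodes (the names and addresses are assumptions):
10.10.10.1 corosync-uno
10.10.10.2 corosync-due
10.10.10.3 corosync-tre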
Since Proxmox VE 5.1, while supported, hostnames will be resolved at the time of
entry. Only the resolved IP is saved to the configuration.
自 Proxmox VE 5.1 起,虽然支持使用主机名,但主机名会在输入时被解析。配置中只保存解析后的 IP。
Nodes that joined the cluster on earlier versions likely still use their
unresolved hostname in corosync.conf. It might be a good idea to replace
them with IPs or a separate hostname, as mentioned above.
在早期版本中加入集群的节点可能仍在 corosync.conf 中使用未解析的主机名。正如上文所述,最好将其替换为 IP 或单独的主机名。
5.8. Corosync Redundancy
5.8. Corosync 冗余
Corosync supports redundant networking via its integrated Kronosnet layer by
default (it is not supported on the legacy udp/udpu transports). It can be
enabled by specifying more than one link address, either via the --linkX
parameters of pvecm, in the GUI as Link 1 (while creating a cluster or
adding a new node) or by specifying more than one ringX_addr in
corosync.conf.
Corosync 默认通过其集成的 Kronosnet 层支持冗余网络(在传统的 udp/udpu 传输上不支持)。可以通过指定多个链路地址来启用,方法是在 pvecm 的 --linkX 参数中指定,或者在 GUI 中作为链路 1(创建集群或添加新节点时),也可以在 corosync.conf 中指定多个 ringX_addr。
|
|
To provide useful failover, every link should be on its own
physical network connection. 为了实现有效的故障切换,每个链路应连接到独立的物理网络。 |
Links are used according to a priority setting. You can configure this priority
by setting knet_link_priority in the corresponding interface section in
corosync.conf, or, preferably, using the priority parameter when creating
your cluster with pvecm:
链路根据优先级设置使用。您可以通过在 corosync.conf 中相应接口部分设置 knet_link_priority 来配置此优先级,或者更推荐在使用 pvecm 创建集群时使用 priority 参数:
# pvecm create CLUSTERNAME --link0 10.10.10.1,priority=15 --link1 10.20.20.1,priority=20
This would cause link1 to be used first, since it has the higher priority.
这将导致优先级更高的 link1 被优先使用。
If no priorities are configured manually (or two links have the same priority),
links will be used in order of their number, with the lower number having higher
priority.
如果没有手动配置优先级(或者两个链路具有相同的优先级),链路将按编号顺序使用,编号较低的优先级较高。
Even if all links are working, only the one with the highest priority will see
corosync traffic. Link priorities cannot be mixed, meaning that links with
different priorities will not be able to communicate with each other.
即使所有链路都正常工作,只有优先级最高的链路会传输 corosync 流量。链路优先级不能混合使用,这意味着不同优先级的链路之间无法相互通信。
Since lower priority links will not see traffic unless all higher priorities
have failed, it becomes a useful strategy to specify networks used for
other tasks (VMs, storage, etc.) as low-priority links. If worst comes to
worst, a higher latency or more congested connection might be better than no
connection at all.
由于除非所有更高优先级的链路都失败,否则较低优先级的链路不会接收流量,因此将用于其他任务(虚拟机、存储等)的网络指定为低优先级链路是一种有效策略。如果情况最糟糕,延迟更高或更拥堵的连接也可能比没有连接要好。
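For completeness, the same priority could also be set directly in the corresponding interface section of corosync.conf; a sketch for the second link might look like the following (remember to increment config_version when editing the file):
interface {
linknumber: 1
knet_link_priority: 20
}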
5.8.1. Adding Redundant Links To An Existing Cluster
5.8.1. 向现有集群添加冗余链路
To add a new link to a running configuration, first check how to
edit the corosync.conf file.
要向正在运行的配置中添加新链路,首先检查如何编辑 corosync.conf 文件。
Then, add a new ringX_addr to every node in the nodelist section. Make
sure that your X is the same for every node you add it to, and that it is
unique for each node.
然后,在 nodelist 部分的每个节点中添加一个新的 ringX_addr。确保你添加的 X 在每个节点中都相同,并且对每个节点都是唯一的。
Lastly, add a new interface, as shown below, to your totem
section, replacing X with the link number chosen above.
最后,在 totem 部分添加一个新的接口,如下所示,将 X 替换为上面选择的链路编号。
Assuming you added a link with number 1, the new configuration file could look
like this:
假设你添加了编号为 1 的链路,新的配置文件可能如下所示:
logging {
debug: off
to_syslog: yes
}
nodelist {
node {
name: due
nodeid: 2
quorum_votes: 1
ring0_addr: 10.10.10.2
ring1_addr: 10.20.20.2
}
node {
name: tre
nodeid: 3
quorum_votes: 1
ring0_addr: 10.10.10.3
ring1_addr: 10.20.20.3
}
node {
name: uno
nodeid: 1
quorum_votes: 1
ring0_addr: 10.10.10.1
ring1_addr: 10.20.20.1
}
}
quorum {
provider: corosync_votequorum
}
totem {
cluster_name: testcluster
config_version: 4
ip_version: ipv4-6
secauth: on
version: 2
interface {
linknumber: 0
}
interface {
linknumber: 1
}
}
The new link will be enabled as soon as you follow the last steps to
edit the corosync.conf file. A restart should not
be necessary. You can check that corosync loaded the new link using:
只要你完成最后一步编辑 corosync.conf 文件,新链路就会被启用。通常不需要重启。你可以使用以下命令检查 corosync 是否加载了新链路:
journalctl -b -u corosync
It might be a good idea to test the new link by temporarily disconnecting the
old link on one node and making sure that its status remains online while
disconnected:
测试新链路的一个好方法是在某个节点上暂时断开旧链路,确认其状态在断开时仍然保持在线:
pvecm status
If you see a healthy cluster state, it means that your new link is being used.
如果你看到集群状态正常,说明新链路正在被使用。
5.9. Role of SSH in Proxmox VE Clusters
5.9. SSH 在 Proxmox VE 集群中的作用
Proxmox VE utilizes SSH tunnels for various features.
Proxmox VE 利用 SSH 隧道实现多种功能。
-
Proxying console/shell sessions (node and guests)
代理控制台/终端会话(节点和虚拟机)When using the shell for node B while being connected to node A, the connection goes to a terminal proxy on node A, which is in turn connected to the login shell on node B via a non-interactive SSH tunnel.
当在连接到节点 A 的情况下使用节点 B 的 Shell 时,会连接到节点 A 上的终端代理,该代理通过非交互式 SSH 隧道连接到节点 B 上的登录 Shell。 -
VM and CT memory and local-storage migration in secure mode.
虚拟机和容器的内存及本地存储在安全模式下的迁移。During the migration, one or more SSH tunnel(s) are established between the source and target nodes, in order to exchange migration information and transfer memory and disk contents.
迁移过程中,源节点和目标节点之间会建立一个或多个 SSH 隧道,用于交换迁移信息以及传输内存和磁盘内容。 -
Storage replication 存储复制
5.9.1. SSH setup 5.9.1. SSH 设置
On Proxmox VE systems, the following changes are made to the SSH configuration/setup:
在 Proxmox VE 系统上,对 SSH 配置/设置进行了以下更改:
-
the root user’s SSH client config gets setup to prefer AES over ChaCha20
root 用户的 SSH 客户端配置被设置为优先使用 AES 而非 ChaCha20 -
the root user’s authorized_keys file gets linked to /etc/pve/priv/authorized_keys, merging all authorized keys within a cluster
root 用户的 authorized_keys 文件被链接到 /etc/pve/priv/authorized_keys,合并集群内的所有授权密钥 -
sshd is configured to allow logging in as root with a password
sshd 被配置为允许使用密码以 root 身份登录
|
|
Older systems might also have /etc/ssh/ssh_known_hosts set up as symlink
pointing to /etc/pve/priv/known_hosts, containing a merged version of all
node host keys. This system was replaced with explicit host key pinning in
pve-cluster <<INSERT VERSION>>, the symlink can be deconfigured if still in
place by running pvecm updatecerts --unmerge-known-hosts. 较旧的系统中,/etc/ssh/ssh_known_hosts 可能被设置为指向 /etc/pve/priv/known_hosts 的符号链接,后者包含所有节点主机密钥的合并版本。该系统在 pve-cluster <<INSERT VERSION>> 中被显式的主机密钥固定所取代,如果符号链接仍然存在,可以通过运行 pvecm updatecerts --unmerge-known-hosts 来取消配置。 |
5.9.2. Pitfalls due to automatic execution of .bashrc and siblings
5.9.2. 由于自动执行 .bashrc 及其相关文件导致的问题
In case you have a custom .bashrc, or similar files that get executed on
login by the configured shell, ssh will automatically run it once the session
is established successfully. This can cause some unexpected behavior, as those
commands may be executed with root permissions on any of the operations
described above. This can cause possible problematic side-effects!
如果您有自定义的 .bashrc 或类似文件,这些文件会在配置的 Shell 登录时自动执行,ssh 会在会话成功建立后自动运行它们。这可能导致一些意外行为,因为这些命令可能会以 root 权限执行上述任何操作。这可能引发潜在的问题副作用!
In order to avoid such complications, it’s recommended to add a check in
/root/.bashrc to make sure the session is interactive, and only then run
.bashrc commands.
为避免此类复杂情况,建议在 /root/.bashrc 中添加检查,确保会话是交互式的,然后才执行 .bashrc 中的命令。
You can add this snippet at the beginning of your .bashrc file:
你可以将此代码片段添加到你的 .bashrc 文件开头:
# Early exit if not running interactively to avoid side-effects!
case $- in
*i*) ;;
*) return;;
esac
5.10. Corosync External Vote Support
5.10. Corosync 外部投票支持
This section describes a way to deploy an external voter in a Proxmox VE cluster.
When configured, the cluster can sustain more node failures without
violating safety properties of the cluster communication.
本节介绍了一种在 Proxmox VE 集群中部署外部投票器的方法。配置后,集群可以在不违反集群通信安全属性的情况下承受更多节点故障。
For this to work, there are two services involved:
为使其工作,涉及两个服务:
-
A QDevice daemon which runs on each Proxmox VE node
运行在每个 Proxmox VE 节点上的 QDevice 守护进程 -
An external vote daemon which runs on an independent server
运行在独立服务器上的外部投票守护进程
As a result, you can achieve higher availability, even in smaller setups (for
example 2+1 nodes).
因此,即使在较小的环境中(例如 2+1 节点),也能实现更高的可用性。
5.10.1. QDevice Technical Overview
5.10.1. QDevice 技术概述
The Corosync Quorum Device (QDevice) is a daemon which runs on each cluster
node. It provides a configured number of votes to the cluster’s quorum
subsystem, based on an externally running third-party arbitrator’s decision.
Its primary use is to allow a cluster to sustain more node failures than
standard quorum rules allow. This can be done safely as the external device
can see all nodes and thus choose only one set of nodes to give its vote.
This will only be done if said set of nodes can have quorum (again) after
receiving the third-party vote.
Corosync 仲裁设备(QDevice)是运行在每个集群节点上的守护进程。它根据外部运行的第三方仲裁器的决策,向集群的仲裁子系统提供配置数量的投票。其主要用途是允许集群容忍比标准仲裁规则更多的节点故障。这可以安全地实现,因为外部设备可以看到所有节点,从而只选择一组节点来投票。只有当该节点集合在接收到第三方投票后能够重新获得仲裁权时,才会进行投票。
Currently, only QDevice Net is supported as a third-party arbitrator. This is
a daemon which provides a vote to a cluster partition, if it can reach the
partition members over the network. It will only give votes to one partition
of a cluster at any time.
It’s designed to support multiple clusters and is almost configuration and
state free. New clusters are handled dynamically and no configuration file
is needed on the host running a QDevice.
目前,只有 QDevice Net 被支持作为第三方仲裁器。它是一个守护进程,如果能够通过网络访问集群分区的成员,则向该分区提供投票。它在任何时候只会向集群的一个分区投票。该设计支持多个集群,几乎不需要配置和状态管理。新集群会被动态处理,运行 QDevice 的主机上无需配置文件。
The only requirements for the external host are that it needs network access to
the cluster and to have a corosync-qnetd package available. We provide a package
for Debian based hosts, and other Linux distributions should also have a package
available through their respective package manager.
外部主机的唯一要求是需要能够访问集群网络,并且必须有可用的 corosync-qnetd 包。我们为基于 Debian 的主机提供了该包,其他 Linux 发行版也应通过各自的包管理器提供相应的包。
|
|
Unlike corosync itself, a QDevice connects to the cluster over TCP/IP.
The daemon can also run outside the LAN of the cluster and isn’t limited to the
low latencies requirements of corosync. 与 corosync 本身不同,QDevice 通过 TCP/IP 连接到集群。该守护进程也可以运行在集群局域网之外,不受 corosync 对低延迟的限制。 |
5.10.2. Supported Setups
5.10.2. 支持的配置
We support QDevices for clusters with an even number of nodes and recommend
it for 2 node clusters, if they should provide higher availability.
For clusters with an odd node count, we currently discourage the use of
QDevices. The reason for this is the difference in the votes which the QDevice
provides for each cluster type. Even numbered clusters get a single additional
vote, which only increases availability, because if the QDevice
itself fails, you are in the same position as with no QDevice at all.
我们支持偶数节点的集群使用 QDevice,并建议在需要更高可用性的双节点集群中使用。对于奇数节点的集群,我们目前不建议使用 QDevice。原因在于 QDevice 对不同类型集群所提供的投票权不同。偶数节点集群获得一个额外的投票权,这只会提高可用性,因为如果 QDevice 本身发生故障,情况与没有 QDevice 时相同。
On the other hand, with an odd numbered cluster size, the QDevice provides
(N-1) votes — where N corresponds to the cluster node count. This
alternative behavior makes sense; if it had only one additional vote, the
cluster could get into a split-brain situation. This algorithm allows for all
nodes but one (and naturally the QDevice itself) to fail. However, there are two
drawbacks to this:
另一方面,对于奇数个节点的集群大小,QDevice 提供 (N-1) 票数——其中 N 对应集群节点数。这个替代行为是合理的;如果它只有一个额外的票数,集群可能会进入脑裂状态。该算法允许除一个节点外的所有节点(以及 QDevice 本身)都失败。然而,这有两个缺点:
-
If the QNet daemon itself fails, no other node may fail or the cluster immediately loses quorum. For example, in a cluster with 15 nodes, 7 could fail before the cluster becomes inquorate. But, if a QDevice is configured here and it itself fails, no single node of the 15 may fail. The QDevice acts almost as a single point of failure in this case.
如果 QNet 守护进程本身失败,其他任何节点都不能失败,否则集群会立即失去法定人数。例如,在一个有 15 个节点的集群中,最多可以有 7 个节点失败,集群才会失去法定人数。但如果这里配置了 QDevice 并且它本身失败,15 个节点中任何一个节点都不能失败。在这种情况下,QDevice 几乎成为单点故障。 -
The fact that all but one node plus QDevice may fail sounds promising at first, but this may result in a mass recovery of HA services, which could overload the single remaining node. Furthermore, a Ceph server will stop providing services if only ((N-1)/2) nodes or less remain online.
除一个节点外加 QDevice 都可能失败这一点乍看之下很有吸引力,但这可能导致 HA 服务的大规模恢复,从而可能使唯一剩余的节点过载。此外,如果只剩下 ((N-1)/2) 个或更少的节点在线,Ceph 服务器将停止提供服务。
If you understand the drawbacks and implications, you can decide yourself if
you want to use this technology in an odd numbered cluster setup.
如果你理解这些缺点和影响,可以自行决定是否在奇数节点的集群设置中使用这项技术。
5.10.3. QDevice-Net Setup
5.10.3. QDevice-Net 设置
We recommend running any daemon which provides votes to corosync-qdevice as an
unprivileged user. Proxmox VE and Debian provide a package which is already
configured to do so.
The traffic between the daemon and the cluster must be encrypted to ensure a
safe and secure integration of the QDevice in Proxmox VE.
我们建议以非特权用户身份运行任何向 corosync-qdevice 提供投票的守护进程。Proxmox VE 和 Debian 提供了一个已配置好的包来实现这一点。守护进程与集群之间的通信必须加密,以确保 QDevice 在 Proxmox VE 中的安全集成。
First, install the corosync-qnetd package on your external server
首先,在您的外部服务器上安装 corosync-qnetd 包
external# apt install corosync-qnetd
and the corosync-qdevice package on all cluster nodes
然后在所有集群节点上安装 corosync-qdevice 包
pve# apt install corosync-qdevice
After doing this, ensure that all the nodes in the cluster are online.
完成此操作后,确保集群中的所有节点均已上线。
You can now set up your QDevice by running the following command on one
of the Proxmox VE nodes:
现在,您可以在其中一个 Proxmox VE 节点上运行以下命令来设置您的 QDevice:
pve# pvecm qdevice setup <QDEVICE-IP>
The SSH key from the cluster will be automatically copied to the QDevice.
集群的 SSH 密钥将自动复制到 QDevice。
|
|
Make sure to setup key-based access for the root user on your external
server, or temporarily allow root login with password during the setup phase.
If you receive an error such as Host key verification failed. at this
stage, running pvecm updatecerts could fix the issue. 请确保在您的外部服务器上为 root 用户设置基于密钥的访问,或者在设置阶段临时允许 root 用户使用密码登录。如果此时收到类似“Host key verification failed.”的错误,运行 pvecm updatecerts 可能会解决该问题。 |
After all the steps have successfully completed, you will see "Done". You can
verify that the QDevice has been set up with:
所有步骤成功完成后,您将看到“完成”。您可以通过以下命令验证 QDevice 是否已设置:
pve# pvecm status
...
Votequorum information
~~~~~~~~~~~~~~~~~~~~~
Expected votes: 3
Highest expected: 3
Total votes: 3
Quorum: 2
Flags: Quorate Qdevice
Membership information
~~~~~~~~~~~~~~~~~~~~~~
Nodeid Votes Qdevice Name
0x00000001 1 A,V,NMW 192.168.22.180 (local)
0x00000002 1 A,V,NMW 192.168.22.181
0x00000000 1 Qdevice
QDevice Status Flags QDevice 状态标志
The status output of the QDevice, as seen above, will usually contain three
columns:
如上所示,QDevice 的状态输出通常包含三列:
-
A / NA: Alive or Not Alive. Indicates if the communication to the external corosync-qnetd daemon works.
A / NA:存活或未存活。表示与外部 corosync-qnetd 守护进程的通信是否正常。 -
V / NV: If the QDevice will cast a vote for the node. In a split-brain situation, where the corosync connection between the nodes is down, but they both can still communicate with the external corosync-qnetd daemon, only one node will get the vote.
V / NV:QDevice 是否会为节点投票。在分脑情况下,当节点之间的 corosync 连接断开,但它们仍能与外部的 corosync-qnetd 守护进程通信时,只有一个节点会获得投票权。 -
MW / NMW: Master wins (MW) or not (NMW). Default is NMW, see [12].
MW / NMW:主节点获胜(MV)或不获胜(NMW)。默认是 NMW,详见[12]。 -
NR: QDevice is not registered.
NR:QDevice 未注册。
|
|
If your QDevice is listed as Not Alive (NA in the output above),
ensure that port 5403 (the default port of the qnetd server) of your external
server is reachable via TCP/IP! 如果您的 QDevice 显示为未存活(上述输出中的 NA),请确保您的外部服务器的 5403 端口(qnetd 服务器的默认端口)可以通过 TCP/IP 访问! |
5.10.4. Frequently Asked Questions
5.10.4. 常见问题解答
Tie Breaking 平局决胜
In case of a tie, where two same-sized cluster partitions cannot see each other
but can see the QDevice, the QDevice chooses one of those partitions randomly
and provides a vote to it.
在出现平局的情况下,如果两个大小相同的集群分区彼此不可见,但可以看到 QDevice,QDevice 会随机选择其中一个分区并为其投票。
Possible Negative Implications
可能的负面影响
For clusters with an even node count, there are no negative implications when
using a QDevice. If it fails to work, it is the same as not having a QDevice
at all.
对于节点数量为偶数的集群,使用 QDevice 不会带来负面影响。如果 QDevice 无法正常工作,其效果等同于根本没有使用 QDevice。
Adding/Deleting Nodes After QDevice Setup
在设置 QDevice 后添加/删除节点
If you want to add a new node or remove an existing one from a cluster with a
QDevice setup, you need to remove the QDevice first. After that, you can add or
remove nodes normally. Once you have a cluster with an even node count again,
you can set up the QDevice again as described previously.
如果您想在已设置 QDevice 的集群中添加新节点或删除现有节点,首先需要移除 QDevice。之后,您可以正常添加或删除节点。一旦集群再次拥有偶数个节点,您可以按照之前描述的方法重新设置 QDevice。
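A sketch of that sequence, run on one of the cluster nodes:
pve# pvecm qdevice remove
Then add or remove the nodes as usual, and afterwards run:
pve# pvecm qdevice setup <QDEVICE-IP>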
5.11. Corosync Configuration
5.11. Corosync 配置
The /etc/pve/corosync.conf file plays a central role in a Proxmox VE cluster. It
controls the cluster membership and its network.
For further information about it, check the corosync.conf man page:
/etc/pve/corosync.conf 文件在 Proxmox VE 集群中起着核心作用。它控制集群成员资格及其网络。有关更多信息,请查看 corosync.conf 的手册页:
man corosync.conf
For node membership, you should always use the pvecm tool provided by Proxmox VE.
You may have to edit the configuration file manually for other changes.
Here are a few best practice tips for doing this.
对于节点成员资格,您应始终使用 Proxmox VE 提供的 pvecm 工具。对于其他更改,您可能需要手动编辑配置文件。以下是一些最佳实践建议。
5.11.1. Edit corosync.conf
5.11.1. 编辑 corosync.conf
Editing the corosync.conf file is not always very straightforward. There are
two on each cluster node, one in /etc/pve/corosync.conf and the other in
/etc/corosync/corosync.conf. Editing the one in our cluster file system will
propagate the changes to the local one, but not vice versa.
编辑 corosync.conf 文件并不总是很简单。每个集群节点上有两个文件,一个位于 /etc/pve/corosync.conf,另一个位于 /etc/corosync/corosync.conf。编辑集群文件系统中的那个文件会将更改传播到本地文件,但反之则不然。
The configuration will get updated automatically, as soon as the file changes.
This means that changes which can be integrated in a running corosync will take
effect immediately. Thus, you should always make a copy and edit that instead,
to avoid triggering unintended changes when saving the file while editing.
配置会在文件更改后自动更新。这意味着可以集成到正在运行的 corosync 中的更改会立即生效。因此,您应始终先复制一份文件并编辑复制件,以避免在编辑时保存文件触发意外更改。
cp /etc/pve/corosync.conf /etc/pve/corosync.conf.new
Then, open the config file with your favorite editor, such as nano or
vim.tiny, which come pre-installed on every Proxmox VE node.
然后,使用您喜欢的编辑器打开配置文件,比如 nano 或 vim.tiny,这些编辑器在每个 Proxmox VE 节点上都预装了。
|
|
Always increment the config_version number after configuration changes;
omitting this can lead to problems. 每次配置更改后务必增加 config_version 编号;否则可能导致问题。 |
After making the necessary changes, create another copy of the current working
configuration file. This serves as a backup if the new configuration fails to
apply or causes other issues.
在进行必要的更改后,创建当前工作配置文件的另一个副本。这样如果新配置无法应用或导致其他问题,可以作为备份。
cp /etc/pve/corosync.conf /etc/pve/corosync.conf.bak
Then replace the old configuration file with the new one:
然后用新的配置文件替换旧的配置文件:
mv /etc/pve/corosync.conf.new /etc/pve/corosync.conf
You can check if the changes could be applied automatically, using the following
commands:
您可以使用以下命令检查更改是否能够自动应用:
systemctl status corosync
journalctl -b -u corosync
If the changes could not be applied automatically, you may have to restart the
corosync service via:
如果更改无法自动应用,您可能需要通过以下命令重启 corosync 服务:
systemctl restart corosync
On errors, check the troubleshooting section below.
出现错误时,请检查下面的故障排除部分。
5.11.2. Troubleshooting 5.11.2. 故障排除
Issue: quorum.expected_votes must be configured
问题:必须配置 quorum.expected_votes
When corosync starts to fail and you get the following message in the system log:
当 corosync 启动失败,并且您在系统日志中看到以下信息时:
[...]
corosync[1647]: [QUORUM] Quorum provider: corosync_votequorum failed to initialize.
corosync[1647]: [SERV ] Service engine 'corosync_quorum' failed to load for reason
'configuration error: nodelist or quorum.expected_votes must be configured!'
[...]
It means that the hostname you set for a corosync ringX_addr in the
configuration could not be resolved.
这意味着您为配置中的 corosync ringX_addr 设置的主机名无法解析。
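To verify this, you could check the resolution of the affected name on the node itself, for example:
getent hosts <hostname-used-as-ringX_addr>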
Write Configuration When Not Quorate
在无法达成法定人数时写入配置
If you need to change /etc/pve/corosync.conf on a node with no quorum, and you
understand what you are doing, use:
如果您需要在无法达成法定人数的节点上更改 /etc/pve/corosync.conf,并且您清楚自己在做什么,请使用:
pvecm expected 1
This sets the expected vote count to 1 and makes the cluster quorate. You can
then fix your configuration, or revert it back to the last working backup.
这将预期投票数设置为 1,使集群达到法定人数。然后,您可以修复配置,或将其恢复到最后一个可用的备份。
This is not enough if corosync cannot start anymore. In that case, it is best to
edit the local copy of the corosync configuration in
/etc/corosync/corosync.conf, so that corosync can start again. Ensure that on
all nodes, this configuration has the same content to avoid split-brain
situations.
如果 corosync 无法启动,仅此是不够的。在这种情况下,最好编辑本地的 corosync 配置文件 /etc/corosync/corosync.conf,以便 corosync 能够重新启动。确保所有节点上的该配置内容相同,以避免脑裂情况。
5.12. Cluster Cold Start
5.12. 集群冷启动
It is obvious that a cluster is not quorate when all nodes are
offline. This is a common case after a power failure.
当所有节点都离线时,集群显然没有法定人数。这是在断电后常见的情况。
|
|
It is always a good idea to use an uninterruptible power supply
(“UPS”, also called “battery backup”) to avoid this state, especially if
you want HA. 使用不间断电源(“UPS”,也称为“电池备份”)来避免这种状态总是一个好主意,尤其是当你需要高可用性(HA)时。 |
On node startup, the pve-guests service is started and waits for
quorum. Once quorate, it starts all guests which have the onboot
flag set.
节点启动时,会启动 pve-guests 服务并等待法定人数。一旦达到法定人数,它会启动所有设置了开机启动标志的虚拟机。
When you turn on nodes, or when power comes back after power failure,
it is likely that some nodes will boot faster than others. Please keep in
mind that guest startup is delayed until you reach quorum.
当您开启节点,或断电后电源恢复时,某些节点启动速度可能会比其他节点快。请记住,来宾启动会延迟,直到达到法定人数。
5.13. Guest VMID Auto-Selection
5.13. 来宾 VMID 自动选择
When creating new guests the web interface will ask the backend for a free VMID
automatically. The default range for searching is 100 to 1000000 (lower
than the maximal allowed VMID enforced by the schema).
创建新来宾时,网页界面会自动向后端请求一个空闲的 VMID。默认的搜索范围是 100 到 1000000(低于架构强制执行的最大允许 VMID)。
Sometimes admins want to allocate new VMIDs in a separate range, for
example to easily separate temporary VMs from those whose VMID was chosen
manually. At other times, it is simply desirable to provide VMIDs of a stable
length, for which setting the lower boundary to, for example, 100000 gives much more room.
有时管理员希望在单独的范围内分配新的 VMID,例如为了轻松区分临时虚拟机和手动选择 VMID 的虚拟机。另一些时候,则希望提供一个固定长度的 VMID,例如将下限设置为 100000,可以提供更大的空间。
To accommodate this use case one can set either lower, upper or both boundaries
via the datacenter.cfg configuration file, which can be edited in the web
interface under Datacenter → Options.
为了满足此用例,可以通过 datacenter.cfg 配置文件设置下限、上限或两者的边界,该文件可以在 Web 界面的 Datacenter → Options 中编辑。
|
|
The range is only used for the next-id API call, so it isn’t a hard
limit. 该范围仅用于 next-id API 调用,因此它不是一个硬性限制。 |
5.14. Guest Migration 5.14. 客户机迁移
Migrating virtual guests to other nodes is a useful feature in a
cluster. There are settings to control the behavior of such
migrations. This can be done via the configuration file
datacenter.cfg or for a specific migration via API or command-line
parameters.
将虚拟客户机迁移到其他节点是集群中的一个有用功能。可以通过设置来控制此类迁移的行为。此操作可以通过配置文件 datacenter.cfg 完成,或者针对特定迁移通过 API 或命令行参数进行设置。
It makes a difference if a guest is online or offline, or if it has
local resources (like a local disk).
如果客户机是在线还是离线,或者是否有本地资源(如本地磁盘),这会有所不同。
For details about virtual machine migration, see the
QEMU/KVM Migration Chapter.
有关虚拟机迁移的详细信息,请参见 QEMU/KVM 迁移章节。
For details about container migration, see the
Container Migration Chapter.
有关容器迁移的详细信息,请参见容器迁移章节。
5.14.1. Migration Type 5.14.1. 迁移类型
The migration type defines if the migration data should be sent over an
encrypted (secure) channel or an unencrypted (insecure) one.
Setting the migration type to insecure means that the RAM content of a
virtual guest is also transferred unencrypted, which can lead to
information disclosure of critical data from inside the guest (for
example, passwords or encryption keys).
迁移类型定义了迁移数据是通过加密(安全)通道还是未加密(不安全)通道发送。将迁移类型设置为不安全意味着虚拟客户机的内存内容也会以未加密的方式传输,这可能导致客户机内部关键数据(例如密码或加密密钥)泄露。
Therefore, we strongly recommend using the secure channel if you do
not have full control over the network and can not guarantee that no
one is eavesdropping on it.
因此,如果您无法完全控制网络,且无法保证没有人监听网络,我们强烈建议使用安全通道。
|
|
Storage migration does not follow this setting. Currently, it
always sends the storage content over a secure channel. 存储迁移不遵循此设置。目前,它始终通过安全通道发送存储内容。 |
Encryption requires a lot of computing power, so this setting is often
changed to insecure to achieve better performance. The impact on
modern systems is lower because they implement AES encryption in
hardware. The performance impact is particularly evident in fast
networks, where you can transfer 10 Gbps or more.
加密需要大量计算能力,因此此设置常被更改为不安全以获得更好的性能。对现代系统的影响较小,因为它们在硬件中实现了 AES 加密。性能影响在高速网络中尤为明显,例如可以传输 10 Gbps 或更高速度的网络。
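As a sketch, accepting that trade-off cluster-wide would be a single line in /etc/pve/datacenter.cfg; only do this on a fully isolated, trusted network:
migration: insecure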
5.14.2. Migration Network
5.14.2. 迁移网络
By default, Proxmox VE uses the network in which cluster communication
takes place to send the migration traffic. This is not optimal both because
sensitive cluster traffic can be disrupted and this network may not
have the best bandwidth available on the node.
默认情况下,Proxmox VE 使用集群通信所在的网络来发送迁移流量。这并不理想,因为敏感的集群流量可能会被干扰,而且该网络在节点上可能没有最佳的带宽。
Setting the migration network parameter allows the use of a dedicated
network for all migration traffic. In addition to the memory,
this also affects the storage traffic for offline migrations.
设置迁移网络参数可以为所有迁移流量使用专用网络。除了内存迁移外,这也会影响离线迁移的存储流量。
The migration network is set as a network using CIDR notation. This
has the advantage that you don’t have to set individual IP addresses
for each node. Proxmox VE can determine the real address on the
destination node from the network specified in the CIDR form. To
enable this, the network must be specified so that each node has exactly one
IP in the respective network.
迁移网络以 CIDR 表示法设置为一个网络。这有一个优点,即不必为每个节点设置单独的 IP 地址。Proxmox VE 可以根据 CIDR 形式指定的网络确定目标节点上的真实地址。为启用此功能,必须指定网络,使得每个节点在相应网络中恰好有一个 IP。
Example 示例
We assume that we have a three-node setup, with three separate
networks. One for public communication with the Internet, one for
cluster communication, and a very fast one, which we want to use as a
dedicated network for migration.
我们假设有一个三节点的设置,包含三个独立的网络。一个用于与互联网的公共通信,一个用于集群通信,还有一个非常快速的网络,我们希望将其用作专用的迁移网络。
A network configuration for such a setup might look as follows:
这种设置的网络配置可能如下所示:
iface eno1 inet manual
# public network
auto vmbr0
iface vmbr0 inet static
address 192.X.Y.57/24
gateway 192.X.Y.1
bridge-ports eno1
bridge-stp off
bridge-fd 0
# cluster network
auto eno2
iface eno2 inet static
address 10.1.1.1/24
# fast network
auto eno3
iface eno3 inet static
address 10.1.2.1/24
Here, we will use the network 10.1.2.0/24 as a migration network. For
a single migration, you can do this using the migration_network
parameter of the command-line tool:
这里,我们将使用网络 10.1.2.0/24 作为迁移网络。对于单次迁移,可以使用命令行工具的 migration_network 参数来实现:
# qm migrate 106 tre --online --migration_network 10.1.2.0/24
To configure this as the default network for all migrations in the
cluster, set the migration property of the /etc/pve/datacenter.cfg
file:
要将此配置为集群中所有迁移的默认网络,请设置 /etc/pve/datacenter.cfg 文件的 migration 属性:
# use dedicated migration network
migration: secure,network=10.1.2.0/24
|
|
The migration type must always be set when the migration network
is set in /etc/pve/datacenter.cfg. 当在 /etc/pve/datacenter.cfg 中设置迁移网络时,必须始终设置迁移类型。 |
6. Proxmox Cluster File System (pmxcfs)
6. Proxmox 集群文件系统(pmxcfs)
The Proxmox Cluster file system (“pmxcfs”) is a database-driven file
system for storing configuration files, replicated in real time to all
cluster nodes using corosync. We use this to store all Proxmox VE related
configuration files.
Proxmox 集群文件系统(“pmxcfs”)是一个基于数据库的文件系统,用于存储配置文件,使用 corosync 实时复制到所有集群节点。我们使用它来存储所有与 Proxmox VE 相关的配置文件。
Although the file system stores all data inside a persistent database on disk,
a copy of the data resides in RAM. This imposes restrictions on the maximum
size, which is currently 128 MiB. This is still enough to store the
configuration of several thousand virtual machines.
虽然文件系统将所有数据存储在磁盘上的持久数据库中,但数据的副本保存在内存中。这对最大大小施加了限制,目前为 128 MiB。这个大小仍然足以存储数千台虚拟机的配置。
This system provides the following advantages:
该系统提供以下优势:
-
Seamless replication of all configuration to all nodes in real time
所有配置实时无缝复制到所有节点 -
Provides strong consistency checks to avoid duplicate VM IDs
提供强一致性检查以避免重复的虚拟机 ID -
Read-only when a node loses quorum
当节点失去法定人数时为只读 -
Automatic updates of the corosync cluster configuration to all nodes
自动将 corosync 集群配置更新到所有节点 -
Includes a distributed locking mechanism
包含分布式锁机制
6.1. POSIX Compatibility
6.1. POSIX 兼容性
The file system is based on FUSE, so the behavior is POSIX-like. But
some features are simply not implemented, because we do not need them:
文件系统基于 FUSE,因此行为类似于 POSIX。但有些功能根本没有实现,因为我们不需要它们:
-
You can just generate normal files and directories, but no symbolic links, …
你只能生成普通文件和目录,但不能生成符号链接,…… -
You can’t rename non-empty directories (because this makes it easier to guarantee that VMIDs are unique).
你不能重命名非空目录(因为这样更容易保证 VMID 的唯一性)。 -
You can’t change file permissions (permissions are based on paths)
你不能更改文件权限(权限基于路径)。 -
O_EXCL creates are not atomic (like old NFS)
O_EXCL 创建不是原子操作(类似旧的 NFS) -
O_TRUNC creates are not atomic (FUSE restriction)
O_TRUNC 创建不是原子操作(FUSE 限制)
6.2. File Access Rights
6.2. 文件访问权限
All files and directories are owned by user root and have group
www-data. Only root has write permissions, but group www-data can
read most files. Files below the following paths are only accessible by root:
所有文件和目录均归用户 root 所有,组为 www-data。只有 root 拥有写权限,但组 www-data 可以读取大多数文件。以下路径下的文件仅 root 可访问:
/etc/pve/priv/
/etc/pve/nodes/${NAME}/priv/
6.3. Technology 6.3. 技术
We use the Corosync Cluster Engine for
cluster communication, and SQlite for the
database file. The file system is implemented in user space using
FUSE.
我们使用 Corosync 集群引擎进行集群通信,使用 SQLite 作为数据库文件。文件系统通过 FUSE 在用户空间实现。
6.4. File System Layout
6.4. 文件系统布局
The file system is mounted at:
文件系统挂载在:
/etc/pve
6.4.1. Files 6.4.1. 文件
| File 文件 | Description 描述 |
|---|---|
| authkey.pub | Public key used by the ticket system |
| ceph.conf | Ceph configuration file (note: /etc/ceph/ceph.conf is a symbolic link to this) |
| corosync.conf | Corosync cluster configuration file (prior to Proxmox VE 4.x, this file was called cluster.conf) |
| datacenter.cfg | Proxmox VE datacenter-wide configuration (keyboard layout, proxy, …) |
| domains.cfg | Proxmox VE authentication domains |
| firewall/cluster.fw | Firewall configuration applied to all nodes |
| firewall/<NAME>.fw | Firewall configuration for individual nodes |
| firewall/<VMID>.fw | Firewall configuration for VMs and containers |
| ha/crm_commands | Displays HA operations that are currently being carried out by the CRM |
| ha/manager_status | JSON-formatted information regarding HA services on the cluster |
| ha/resources.cfg | Resources managed by high availability, and their current state |
| nodes/<NAME>/config | Node-specific configuration |
| nodes/<NAME>/lxc/<VMID>.conf | VM configuration data for LXC containers |
| nodes/<NAME>/openvz/ | Prior to Proxmox VE 4.0, used for container configuration data (deprecated, removed soon) |
| nodes/<NAME>/pve-ssl.key | Private SSL key for pve-ssl.pem |
| nodes/<NAME>/pve-ssl.pem | Public SSL certificate for web server (signed by cluster CA) |
| nodes/<NAME>/pveproxy-ssl.key | Private SSL key for pveproxy-ssl.pem (optional) |
| nodes/<NAME>/pveproxy-ssl.pem | Public SSL certificate (chain) for web server (optional override for pve-ssl.pem) |
| nodes/<NAME>/qemu-server/<VMID>.conf | VM configuration data for KVM VMs |
| priv/authkey.key | Private key used by ticket system |
| priv/authorized_keys | SSH keys of cluster members for authentication |
| priv/ceph* | Ceph authentication keys and associated capabilities |
| priv/known_hosts | SSH keys of the cluster members for verification |
| priv/lock/* | Lock files used by various services to ensure safe cluster-wide operations |
| priv/pve-root-ca.key | Private key of cluster CA |
| priv/shadow.cfg | Shadow password file for PVE Realm users |
| priv/storage/<STORAGE-ID>.pw | Contains the password of a storage in plain text |
| priv/tfa.cfg | Base64-encoded two-factor authentication configuration |
| priv/token.cfg | API token secrets of all tokens |
| pve-root-ca.pem | Public certificate of cluster CA |
| pve-www.key | Private key used for generating CSRF tokens |
| sdn/* | Shared configuration files for Software Defined Networking (SDN) |
| status.cfg | Proxmox VE external metrics server configuration |
| storage.cfg | Proxmox VE storage configuration |
| user.cfg | Proxmox VE access control configuration (users/groups/…) |
| virtual-guest/cpu-models.conf | For storing custom CPU models |
| vzdump.cron | Cluster-wide vzdump backup-job schedule |
6.4.2. Symbolic links 6.4.2. 符号链接
Certain directories within the cluster file system use symbolic links, in order
to point to a node’s own configuration files. Thus, the files pointed to in the
table below refer to different files on each node of the cluster.
集群文件系统中的某些目录使用符号链接,以指向节点自身的配置文件。因此,下表中指向的文件在集群的每个节点上都指向不同的文件。
| Link 链接 | Target 目标 |
|---|---|
| local | nodes/<LOCAL_HOST_NAME> |
| lxc | nodes/<LOCAL_HOST_NAME>/lxc/ |
| openvz | nodes/<LOCAL_HOST_NAME>/openvz/ (deprecated, removed soon) |
| qemu-server | nodes/<LOCAL_HOST_NAME>/qemu-server/ |
6.4.3. Special status files for debugging (JSON)
6.4.3. 用于调试的特殊状态文件(JSON)
| File 文件 | Description 描述 |
|---|---|
| .version | File versions (to detect file modifications) |
| .members | Info about cluster members |
| .vmlist | List of all VMs 所有虚拟机列表 |
| .clusterlog | Cluster log (last 50 entries) |
| .rrd | RRD data (most recent entries) |
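Because these entries live directly under the mount point, they can be read with ordinary tools. A quick way to check cluster membership and the guest index from the shell (both files contain JSON; the exact fields may vary between versions):
# cat /etc/pve/.members
# cat /etc/pve/.vmlist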
6.5. Recovery 6.5. 恢复
If you have major problems with your Proxmox VE host, for example hardware
issues, it could be helpful to copy the pmxcfs database file
/var/lib/pve-cluster/config.db, and move it to a new Proxmox VE
host. On the new host (with nothing running), you need to stop the
pve-cluster service and replace the config.db file (required permissions
0600). Following this, adapt /etc/hostname and /etc/hosts according to the
lost Proxmox VE host, then reboot and check (and don’t forget your
VM/CT data).
如果您的 Proxmox VE 主机出现重大问题,例如硬件故障,复制 pmxcfs 数据库文件 /var/lib/pve-cluster/config.db 并将其移动到新的 Proxmox VE 主机可能会有所帮助。在新主机上(没有任何服务运行时),您需要停止 pve-cluster 服务并替换 config.db 文件(所需权限为 0600)。随后,根据丢失的 Proxmox VE 主机调整 /etc/hostname 和 /etc/hosts,然后重启并检查(别忘了您的虚拟机/容器数据)。
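A minimal sketch of that procedure on the new host, assuming the copied database was placed in /root and using the pve-cluster service referenced above:
# systemctl stop pve-cluster
# cp /root/config.db /var/lib/pve-cluster/config.db
# chmod 0600 /var/lib/pve-cluster/config.db
Then adjust /etc/hostname and /etc/hosts as described above and reboot.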
6.5.1. Remove Cluster Configuration
6.5.1. 移除集群配置
The recommended way is to reinstall the node after you remove it from
your cluster. This ensures that all secret cluster/ssh keys and any
shared configuration data is destroyed.
推荐的做法是在将节点从集群中移除后重新安装该节点。这可以确保所有秘密的集群/ssh 密钥以及任何共享的配置数据被销毁。
In some cases, you might prefer to put a node back to local mode without
reinstalling, which is described in
Separate A Node Without Reinstalling.
在某些情况下,您可能更愿意在不重新安装的情况下将节点切换回本地模式,相关内容请参见《不重新安装的情况下分离节点》。
6.5.2. Recovering/Moving Guests from Failed Nodes
6.5.2. 从故障节点恢复/迁移客户机
For the guest configuration files in nodes/<NAME>/qemu-server/ (VMs) and
nodes/<NAME>/lxc/ (containers), Proxmox VE sees the containing node <NAME> as the
owner of the respective guest. This concept enables the usage of local locks
instead of expensive cluster-wide locks for preventing concurrent guest
configuration changes.
对于位于 nodes/<NAME>/qemu-server/(虚拟机)和 nodes/<NAME>/lxc/(容器)中的客户机配置文件,Proxmox VE 将包含它们的节点 <NAME> 视为相应客户机的所有者。该概念使得可以使用本地锁,而非昂贵的集群范围锁,来防止并发的客户机配置更改。
As a consequence, if the owning node of a guest fails (for example, due to a power
outage, fencing event, etc.), a regular migration is not possible (even if all
the disks are located on shared storage), because such a local lock on the
(offline) owning node is unobtainable. This is not a problem for HA-managed
guests, as Proxmox VE’s High Availability stack includes the necessary
(cluster-wide) locking and watchdog functionality to ensure correct and
automatic recovery of guests from fenced nodes.
因此,如果客户机的所有节点发生故障(例如,由于断电、隔离事件等),则无法进行常规迁移(即使所有磁盘都位于共享存储上),因为无法获得(离线)所有节点上的本地锁。这对于由高可用性管理的客户机来说不是问题,因为 Proxmox VE 的高可用性堆栈包含必要的(集群范围)锁定和看门狗功能,以确保从被隔离节点正确且自动地恢复客户机。
If a non-HA-managed guest has only shared disks (and no other local resources
which are only available on the failed node), a manual recovery
is possible by simply moving the guest configuration file from the failed
node’s directory in /etc/pve/ to an online node’s directory (which changes the
logical owner or location of the guest).
如果一个非高可用管理的虚拟机只有共享磁盘(且没有其他仅在故障节点上可用的本地资源),则可以通过将虚拟机配置文件从故障节点的 /etc/pve/ 目录移动到在线节点的目录来手动恢复(这会更改虚拟机的逻辑所有者或位置)。
For example, recovering the VM with ID 100 from an offline node1 to another
node node2 works by running the following command as root on any member node
of the cluster:
例如,要将 ID 为 100 的虚拟机从离线的 node1 恢复到另一个节点 node2,可以在集群的任意成员节点上以 root 身份运行以下命令:
mv /etc/pve/nodes/node1/qemu-server/100.conf /etc/pve/nodes/node2/qemu-server/
|
|
Before manually recovering a guest like this, make absolutely sure
that the failed source node is really powered off/fenced. Otherwise Proxmox VE’s
locking principles are violated by the mv command, which can have unexpected
consequences. 在手动恢复虚拟机之前,务必确认故障的源节点确实已关闭或被隔离。否则,mv 命令会违反 Proxmox VE 的锁定原则,可能导致意想不到的后果。 |
|
|
Guests with local disks (or other local resources which are only
available on the offline node) are not recoverable like this. Either wait for the
failed node to rejoin the cluster or restore such guests from backups. 带有本地磁盘(或其他仅在离线节点上可用的本地资源)的虚拟机无法通过此方法恢复。只能等待故障节点重新加入集群,或从备份中恢复此类虚拟机。 |
7. Proxmox VE Storage
7. Proxmox VE 存储
The Proxmox VE storage model is very flexible. Virtual machine images
can either be stored on one or several local storages, or on shared
storage like NFS or iSCSI (NAS, SAN). There are no limits, and you may
configure as many storage pools as you like. You can use all
storage technologies available for Debian Linux.
Proxmox VE 的存储模型非常灵活。虚拟机镜像可以存储在一个或多个本地存储上,也可以存储在共享存储上,如 NFS 或 iSCSI(NAS,SAN)。没有限制,您可以配置任意数量的存储池。您可以使用 Debian Linux 支持的所有存储技术。
One major benefit of storing VMs on shared storage is the ability to
live-migrate running machines without any downtime, as all nodes in
the cluster have direct access to VM disk images. There is no need to
copy VM image data, so live migration is very fast in that case.
将虚拟机存储在共享存储上的一个主要好处是能够在不中断运行的情况下进行实时迁移,因为集群中的所有节点都可以直接访问虚拟机磁盘镜像。无需复制虚拟机镜像数据,因此实时迁移非常快速。
The storage library (package libpve-storage-perl) uses a flexible
plugin system to provide a common interface to all storage types. This
can be easily adopted to include further storage types in the future.
存储库(包 libpve-storage-perl)使用灵活的插件系统,为所有存储类型提供统一接口。未来可以轻松扩展以支持更多存储类型。
7.1. Storage Types 7.1. 存储类型
There are basically two different classes of storage types:
基本上有两种不同类别的存储类型:
- File level storage 文件级存储
-
File level based storage technologies allow access to a fully featured (POSIX) file system. They are in general more flexible than any Block level storage (see below), and allow you to store content of any type. ZFS is probably the most advanced system, and it has full support for snapshots and clones.
基于文件级的存储技术允许访问功能齐全的(POSIX)文件系统。它们通常比任何块级存储(见下文)更灵活,并且允许存储任何类型的内容。ZFS 可能是最先进的系统,且完全支持快照和克隆。 - Block level storage 块级存储
-
Allows you to store large raw images. It is usually not possible to store other files (ISO, backups, ..) on such storage types. Most modern block level storage implementations support snapshots and clones. RADOS and GlusterFS are distributed systems, replicating storage data to different nodes.
允许存储大型原始镜像。通常无法在此类存储类型上存储其他文件(ISO、备份等)。大多数现代块级存储实现支持快照和克隆。RADOS 和 GlusterFS 是分布式系统,将存储数据复制到不同节点。
| Description 描述 | Plugin type 插件类型 | Level 级别 | Shared 共享 | Snapshots 快照 | Stable 稳定 |
|---|---|---|---|---|---|
| ZFS (local) ZFS(本地) | zfspool | both1 两者 1 | no 否 | yes 是 | yes 是 |
| Directory 目录 | dir 目录 | file 文件 | no 否 | no2 否 2 | yes 是 |
| BTRFS | btrfs | file 文件 | no 否 | yes 是 | technology preview 技术预览 |
| NFS | nfs | file 文件 | yes 是 | no2 否 2 | yes 是 |
| CIFS | cifs | file 文件 | yes 是 | no2 否 2 | yes 是 |
| Proxmox Backup Proxmox 备份 | pbs | both 两者 | yes 是 | n/a 不适用 | yes 是 |
| GlusterFS | glusterfs | file 文件 | yes 是 | no2 否 2 | yes 是 |
| CephFS | cephfs | file 文件 | yes 是 | yes 是 | yes 是 |
| LVM | lvm | block 块 | no3 否 3 | no 否 | yes 是 |
| LVM-thin | lvmthin | block 块 | no 否 | yes 是 | yes 是 |
| iSCSI/kernel iSCSI/内核 | iscsi | block 块 | yes 是 | no 否 | yes 是 |
| iSCSI/libiscsi | iscsidirect | block 块 | yes 是 | no 否 | yes 是 |
| Ceph/RBD | rbd | block 块 | yes 是 | yes 是 | yes 是 |
| ZFS over iSCSI 基于 iSCSI 的 ZFS | zfs | block 块 | yes 是 | yes 是 | yes 是 |
1: Disk images for VMs are stored in ZFS volume (zvol) datasets, which provide
block device functionality.
1 :虚拟机的磁盘镜像存储在 ZFS 卷(zvol)数据集中,这些数据集提供块设备功能。
2: On file based storages, snapshots are possible with the qcow2 format.
2 :在基于文件的存储上,使用 qcow2 格式可以实现快照功能。
3: It is possible to use LVM on top of an iSCSI or FC-based storage.
That way you get a shared LVM storage
3 :可以在基于 iSCSI 或 FC 的存储之上使用 LVM。这样你就得到了一个共享的 LVM 存储。
7.1.1. Thin Provisioning
7.1.1. 精简配置
A number of storages, and the QEMU image format qcow2, support thin
provisioning. With thin provisioning activated, only the blocks that
the guest system actually uses will be written to the storage.
许多存储类型以及 QEMU 镜像格式 qcow2 都支持精简配置。启用精简配置后,只有客户系统实际使用的块才会写入存储。
Say for instance you create a VM with a 32GB hard disk, and after
installing the guest system OS, the root file system of the VM contains
3 GB of data. In that case only 3GB are written to the storage, even
if the guest VM sees a 32GB hard drive. In this way thin provisioning
allows you to create disk images which are larger than the currently
available storage blocks. You can create large disk images for your
VMs, and when the need arises, add more disks to your storage without
resizing the VMs' file systems.
例如,假设你创建了一个带有 32GB 硬盘的虚拟机,安装客户系统操作系统后,虚拟机的根文件系统包含 3GB 数据。在这种情况下,只有 3GB 数据被写入存储,即使客户虚拟机看到的是一个 32GB 的硬盘。通过这种方式,精简配置允许你创建比当前可用存储块更大的磁盘镜像。你可以为虚拟机创建大容量磁盘镜像,并在需要时向存储添加更多硬盘,而无需调整虚拟机文件系统的大小。
All storage types which have the “Snapshots” feature also support thin
provisioning.
所有具有“快照”功能的存储类型也支持精简配置。
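You can observe this on a file-based storage by comparing the virtual size with the space actually allocated for a freshly created qcow2 image; the volume name below is only an example:
# pvesm alloc local 100 vm-100-disk-1.qcow2 32G --format qcow2
# qemu-img info $(pvesm path local:100/vm-100-disk-1.qcow2)
# du -h $(pvesm path local:100/vm-100-disk-1.qcow2)
qemu-img info reports the full 32G virtual size, while du typically shows only a few hundred kilobytes of allocated space until the guest writes data.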
|
|
If a storage runs full, all guests using volumes on that
storage receive IO errors. This can cause file system inconsistencies
and may corrupt your data. So it is advisable to avoid
over-provisioning of your storage resources, or carefully observe
free space to avoid such conditions. 如果存储空间用满,所有使用该存储卷的虚拟机都会收到 IO 错误。这可能导致文件系统不一致,并可能损坏您的数据。因此,建议避免过度配置存储资源,或仔细监控剩余空间以避免此类情况。 |
7.2. Storage Configuration
7.2. 存储配置
All Proxmox VE related storage configuration is stored within a single text
file at /etc/pve/storage.cfg. As this file is within /etc/pve/, it
gets automatically distributed to all cluster nodes. So all nodes
share the same storage configuration.
所有 Proxmox VE 相关的存储配置都存储在单个文本文件 /etc/pve/storage.cfg 中。由于该文件位于 /etc/pve/ 目录下,它会自动分发到所有集群节点。因此,所有节点共享相同的存储配置。
Sharing storage configuration makes perfect sense for shared storage,
because the same “shared” storage is accessible from all nodes. But it is
also useful for local storage types. In this case such local storage
is available on all nodes, but it is physically different and can have
totally different content.
共享存储配置对于共享存储来说非常合理,因为相同的“共享”存储可以从所有节点访问。但这对于本地存储类型也很有用。在这种情况下,这些本地存储在所有节点上都可用,但它们在物理上是不同的,内容也可能完全不同。
7.2.1. Storage Pools 7.2.1. 存储池
Each storage pool has a <type>, and is uniquely identified by its
<STORAGE_ID>. A pool configuration looks like this:
每个存储池都有一个<type>,并通过其<STORAGE_ID>唯一标识。一个存储池配置如下所示:
<type>: <STORAGE_ID>
<property> <value>
<property> <value>
<property>
...
The <type>: <STORAGE_ID> line starts the pool definition, which is then
followed by a list of properties. Most properties require a value. Some have
reasonable defaults, in which case you can omit the value.
<type>: <STORAGE_ID> 行开始定义存储池,随后是一系列属性列表。大多数属性需要一个值。有些属性有合理的默认值,在这种情况下可以省略该值。
To be more specific, take a look at the default storage configuration
after installation. It contains one special local storage pool named
local, which refers to the directory /var/lib/vz and is always
available. The Proxmox VE installer creates additional storage entries
depending on the storage type chosen at installation time.
具体来说,看看安装后的默认存储配置。它包含一个名为 local 的特殊本地存储池,指向目录/var/lib/vz,并且始终可用。Proxmox VE 安装程序会根据安装时选择的存储类型创建额外的存储条目。
Default storage configuration (/etc/pve/storage.cfg)
默认存储配置(/etc/pve/storage.cfg)
dir: local
path /var/lib/vz
content iso,vztmpl,backup
# default image store on LVM based installation
lvmthin: local-lvm
thinpool data
vgname pve
content rootdir,images
# default image store on ZFS based installation
zfspool: local-zfs
pool rpool/data
sparse
content images,rootdir
|
|
It is problematic to have multiple storage configurations pointing to
the exact same underlying storage. Such an aliased storage configuration can
lead to two different volume IDs (volid) pointing to the exact same disk
image. Proxmox VE expects that each volume ID points to a unique disk image. Choosing
different content types for aliased storage configurations can be fine, but
is not recommended. 多个存储配置指向完全相同的底层存储是有问题的。这样的别名存储配置可能导致两个不同的卷 ID(volid)指向完全相同的磁盘镜像。Proxmox VE 期望镜像的卷 ID 是唯一指向的。为别名存储配置选择不同的内容类型可以,但不推荐这样做。 |
7.2.2. Common Storage Properties
7.2.2. 常见存储属性
A few storage properties are common among different storage types.
一些存储属性在不同的存储类型中是通用的。
- nodes 节点
-
List of cluster node names where this storage is usable/accessible. One can use this property to restrict storage access to a limited set of nodes.
集群中此存储可用/可访问的节点名称列表。可以使用此属性将存储访问限制在有限的节点集合中。 - content 内容
-
A storage can support several content types, for example virtual disk images, cdrom iso images, container templates or container root directories. Not all storage types support all content types. One can set this property to select what this storage is used for.
存储可以支持多种内容类型,例如虚拟磁盘映像、CD-ROM ISO 映像、容器模板或容器根目录。并非所有存储类型都支持所有内容类型。可以设置此属性以选择此存储的用途。- images 镜像
-
QEMU/KVM VM images. QEMU/KVM 虚拟机镜像。
- rootdir 根目录
-
Allow to store container data.
允许存储容器数据。 - vztmpl
-
Container templates. 容器模板。
- backup
-
Backup files (vzdump).
备份文件(vzdump)。 - iso
-
ISO images ISO 镜像
- snippets 代码片段
-
Snippet files, for example guest hook scripts
代码片段文件,例如客户机钩子脚本
- shared 共享
-
Indicate that this is a single storage with the same contents on all nodes (or all listed in the nodes option). It will not make the contents of a local storage automatically accessible to other nodes, it just marks an already shared storage as such!
表示这是一个在所有节点(或节点选项中列出的所有节点)上内容相同的单一存储。它不会自动使本地存储的内容对其他节点可访问,只是将已经共享的存储标记为共享! - disable 禁用
-
You can use this flag to disable the storage completely.
您可以使用此标志来完全禁用该存储。 - maxfiles
-
Deprecated, please use prune-backups instead. Maximum number of backup files per VM. Use 0 for unlimited.
已弃用,请改用 prune-backups。每个虚拟机的最大备份文件数。使用 0 表示无限制。 - prune-backups
-
Retention options for backups. For details, see Backup Retention.
备份的保留选项。详情请参见备份保留。 - format 格式
-
Default image format (raw|qcow2|vmdk)
默认镜像格式(raw|qcow2|vmdk) - preallocation 预分配
-
Preallocation mode (off|metadata|falloc|full) for raw and qcow2 images on file-based storages. The default is metadata, which is treated like off for raw images. When using network storages in combination with large qcow2 images, using off can help to avoid timeouts.
针对基于文件存储的 raw 和 qcow2 镜像的预分配模式(off|metadata|falloc|full)。默认值为 metadata,对于 raw 镜像则视同 off。当在网络存储中使用大型 qcow2 镜像时,选择 off 可以帮助避免超时。
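As an illustration, a hypothetical directory pool that combines several of these common properties (all names below are made up) could be defined like this:
dir: ssd-images
        path /mnt/ssd
        content images,rootdir
        nodes pve1,pve2
        preallocation off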
|
|
It is not advisable to use the same storage pool on different
Proxmox VE clusters. Some storage operation need exclusive access to the
storage, so proper locking is required. While this is implemented
within a cluster, it does not work between different clusters. 不建议在不同的 Proxmox VE 集群上使用相同的存储池。某些存储操作需要对存储进行独占访问,因此需要适当的锁定。虽然这在集群内部已经实现,但在不同集群之间并不适用。 |
7.3. Volumes 7.3. 卷
We use a special notation to address storage data. When you allocate
data from a storage pool, it returns such a volume identifier. A volume
is identified by the <STORAGE_ID>, followed by a storage type
dependent volume name, separated by colon. A valid <VOLUME_ID> looks
like:
我们使用一种特殊的表示法来定位存储数据。当您从存储池分配数据时,它会返回这样的卷标识符。一个卷由 <STORAGE_ID> 标识,后跟存储类型相关的卷名称,两者之间用冒号分隔。一个有效的 <VOLUME_ID> 看起来像:
local:230/example-image.raw
local:iso/debian-501-amd64-netinst.iso
local:vztmpl/debian-5.0-joomla_1.5.9-1_i386.tar.gz
iscsi-storage:0.0.2.scsi-14f504e46494c4500494b5042546d2d646744372d31616d61
To get the file system path for a <VOLUME_ID> use:
要获取 <VOLUME_ID> 的文件系统路径,请使用:
pvesm path <VOLUME_ID>
7.3.1. Volume Ownership 7.3.1. 卷所有权
There exists an ownership relation for image type volumes. Each such
volume is owned by a VM or Container. For example, volume
local:230/example-image.raw is owned by VM 230. Most storage
backends encode this ownership information into the volume name.
图像类型卷存在所有权关系。每个此类卷都归某个虚拟机或容器所有。例如,卷 local:230/example-image.raw 归虚拟机 230 所有。大多数存储后端会将此所有权信息编码到卷名称中。
When you remove a VM or Container, the system also removes all
associated volumes which are owned by that VM or Container.
当您删除虚拟机或容器时,系统也会删除该虚拟机或容器所拥有的所有相关卷。
7.4. Using the Command-line Interface
7.4. 使用命令行界面
It is recommended to familiarize yourself with the concept behind storage
pools and volume identifiers, but in real life, you are not forced to do any
of those low level operations on the command line. Normally,
allocation and removal of volumes is done by the VM and Container
management tools.
建议熟悉存储池和卷标识符背后的概念,但在实际操作中,您并不需要强制在命令行上执行这些底层操作。通常,卷的分配和移除是由虚拟机和容器管理工具完成的。
Nevertheless, there is a command-line tool called pvesm (“Proxmox VE
Storage Manager”), which is able to perform common storage management
tasks.
不过,有一个名为 pvesm(“Proxmox VE 存储管理器”)的命令行工具,可以执行常见的存储管理任务。
7.4.1. Examples 7.4.1. 示例
Add storage pools 添加存储池
pvesm add <TYPE> <STORAGE_ID> <OPTIONS>
pvesm add dir <STORAGE_ID> --path <PATH>
pvesm add nfs <STORAGE_ID> --path <PATH> --server <SERVER> --export <EXPORT>
pvesm add lvm <STORAGE_ID> --vgname <VGNAME>
pvesm add iscsi <STORAGE_ID> --portal <HOST[:PORT]> --target <TARGET>
Disable storage pools 禁用存储池
pvesm set <STORAGE_ID> --disable 1
Enable storage pools 启用存储池
pvesm set <STORAGE_ID> --disable 0
Change/set storage options
更改/设置存储选项
pvesm set <STORAGE_ID> <OPTIONS>
pvesm set <STORAGE_ID> --shared 1
pvesm set local --format qcow2
pvesm set <STORAGE_ID> --content iso
Remove storage pools. This does not delete any data, and does not
disconnect or unmount anything. It just removes the storage
configuration.
移除存储池。这不会删除任何数据,也不会断开连接或卸载任何内容。它只是移除存储配置。
pvesm remove <STORAGE_ID>
Allocate volumes 分配卷
pvesm alloc <STORAGE_ID> <VMID> <name> <size> [--format <raw|qcow2>]
Allocate a 4G volume in local storage. The name is auto-generated if
you pass an empty string as <name>
在本地存储中分配一个 4G 的卷。如果将<name>传递为空字符串,名称将自动生成
pvesm alloc local <VMID> '' 4G
Free volumes 释放卷
pvesm free <VOLUME_ID>
|
|
This really destroys all volume data. 这将真正销毁所有卷数据。 |
List storage status 列出存储状态
pvesm status
List storage contents 列出存储内容
pvesm list <STORAGE_ID> [--vmid <VMID>]
List volumes allocated by VMID
列出由 VMID 分配的卷
pvesm list <STORAGE_ID> --vmid <VMID>
List iso images 列出 ISO 镜像
pvesm list <STORAGE_ID> --content iso
List container templates 列出容器模板
pvesm list <STORAGE_ID> --content vztmpl
Show file system path for a volume
显示卷的文件系统路径
pvesm path <VOLUME_ID>
Exporting the volume local:103/vm-103-disk-0.qcow2 to the file target.
This is mostly used internally with pvesm import.
The stream format qcow2+size is different to the qcow2 format.
Consequently, the exported file cannot simply be attached to a VM.
This also holds for the other formats.
将卷 local:103/vm-103-disk-0.qcow2 导出到文件 target。此操作主要在内部与 pvesm import 一起使用。流格式 qcow2+size 与 qcow2 格式不同。因此,导出的文件不能简单地附加到虚拟机。这对于其他格式同样适用。
pvesm export local:103/vm-103-disk-0.qcow2 qcow2+size target --with-snapshots 1
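The counterpart is pvesm import, which reads such a stream back into a volume on the receiving side. The following is only a sketch and assumes the import signature mirrors the export one; check pvesm help import before relying on it:
# pvesm import local:103/vm-103-disk-0.qcow2 qcow2+size target --with-snapshots 1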
7.5. Directory Backend 7.5. 目录后端
Storage pool type: dir 存储池类型:dir
Proxmox VE can use local directories or locally mounted shares for
storage. A directory is a file level storage, so you can store any
content type like virtual disk images, containers, templates, ISO images
or backup files.
Proxmox VE 可以使用本地目录或本地挂载的共享存储。目录是一种文件级存储,因此您可以存储任何类型的内容,如虚拟磁盘镜像、容器、模板、ISO 镜像或备份文件。
|
|
You can mount additional storages via standard linux /etc/fstab,
and then define a directory storage for that mount point. This way you
can use any file system supported by Linux. 您可以通过标准的 Linux /etc/fstab 挂载额外的存储,然后为该挂载点定义一个目录存储。这样您就可以使用 Linux 支持的任何文件系统。 |
This backend assumes that the underlying directory is POSIX
compatible, but nothing else. This implies that you cannot create
snapshots at the storage level. But there exists a workaround for VM
images using the qcow2 file format, because that format supports
snapshots internally.
该后端假设底层目录是 POSIX 兼容的,但不做其他假设。这意味着您无法在存储级别创建快照。但对于使用 qcow2 文件格式的虚拟机镜像,有一种变通方法,因为该格式内部支持快照。
|
|
Some storage types do not support O_DIRECT, so you can’t use
cache mode none with such storages. Simply use cache mode
writeback instead. 某些存储类型不支持 O_DIRECT,因此您无法在此类存储上使用缓存模式 none。只需改用缓存模式 writeback 即可。 |
We use a predefined directory layout to store different content types
into different sub-directories. This layout is used by all file level
storage backends.
我们使用预定义的目录布局将不同的内容类型存储到不同的子目录中。所有文件级存储后端都使用此布局。
| Content type 内容类型 | Subdir 子目录 |
|---|---|
| VM images 虚拟机镜像 | images/<VMID>/ |
| ISO images ISO 镜像 | template/iso/ |
| Container templates 容器模板 | template/cache/ |
| Backup files 备份文件 | dump/ |
| Snippets 代码片段 | snippets/ |
7.5.1. Configuration 7.5.1. 配置
This backend supports all common storage properties, and adds two
additional properties. The path property is used to specify the
directory. This needs to be an absolute file system path.
该后端支持所有常见的存储属性,并添加了两个额外的属性。path 属性用于指定目录。该路径需要是一个绝对文件系统路径。
The optional content-dirs property allows for the default layout
to be changed. It consists of a comma-separated list of identifiers
in the following format:
可选的 content-dirs 属性允许更改默认布局。它由以下格式的逗号分隔标识符列表组成:
vtype=path
Where vtype is one of the allowed content types for the storage, and
path is a path relative to the mountpoint of the storage.
其中 vtype 是存储允许的内容类型之一,path 是相对于存储挂载点的路径。
Configuration example (/etc/pve/storage.cfg)
配置示例(/etc/pve/storage.cfg)
dir: backup
path /mnt/backup
content backup
prune-backups keep-last=7
max-protected-backups 3
content-dirs backup=custom/backup/dir
The above configuration defines a storage pool called backup. That pool can be
used to store up to 7 regular backups (keep-last=7) and 3 protected backups
per VM. The real path for the backup files is /mnt/backup/custom/backup/dir/....
上述配置定义了一个名为 backup 的存储池。该存储池可用于存储每个虚拟机最多 7 个常规备份(keep-last=7)和 3 个受保护备份。备份文件的实际路径是 /mnt/backup/custom/backup/dir/....
7.5.2. File naming conventions
7.5.2. 文件命名约定
This backend uses a well defined naming scheme for VM images:
该后端使用一个明确的命名方案来命名虚拟机镜像:
vm-<VMID>-<NAME>.<FORMAT>
- <VMID>
-
This specifies the owner VM.
这指定了所有者虚拟机。 - <NAME>
-
This can be an arbitrary name (ascii) without white space. The backend uses disk-[N] as default, where [N] is replaced by an integer to make the name unique.
这可以是一个任意名称(ASCII),且不包含空白字符。后端默认使用 disk-[N],其中 [N] 会被替换为一个整数以确保名称唯一。 - <FORMAT>
-
Specifies the image format (raw|qcow2|vmdk).
指定镜像格式(raw|qcow2|vmdk)。
When you create a VM template, all VM images are renamed to indicate
that they are now read-only, and can be used as a base image for clones:
当您创建虚拟机模板时,所有虚拟机镜像都会被重命名,以表明它们现在是只读的,并且可以用作克隆的基础镜像:
base-<VMID>-<NAME>.<FORMAT>
|
|
Such base images are used to generate cloned images. So it is
important that those files are read-only, and never get modified. The
backend changes the access mode to 0444, and sets the immutable flag
(chattr +i) if the storage supports that. 这样的基础镜像用于生成克隆镜像。因此,这些文件必须是只读的,且绝不能被修改。如果存储支持,后端会将访问模式更改为 0444,并设置不可变标志(chattr +i)。 |
7.5.3. Storage Features 7.5.3. 存储功能
As mentioned above, most file systems do not support snapshots out
of the box. To workaround that problem, this backend is able to use
qcow2 internal snapshot capabilities.
如上所述,大多数文件系统默认不支持快照。为了解决这个问题,该后端能够使用 qcow2 的内部快照功能。
Same applies to clones. The backend uses the qcow2 base image
feature to create clones.
克隆也是同样的情况。该后端使用 qcow2 基础镜像功能来创建克隆。
| Content types 内容类型 | Image formats 镜像格式 | Shared 共享 | Snapshots 快照 | Clones 克隆 |
|---|---|---|---|---|
| images rootdir vztmpl iso backup snippets | raw qcow2 vmdk subvol | no 否 | qcow2 | qcow2 |
7.5.4. Examples 7.5.4. 示例
Please use the following command to allocate a 4GB image on storage local:
请使用以下命令在本地存储上分配一个 4GB 的镜像:
# pvesm alloc local 100 vm-100-disk10.raw 4G
Formatting '/var/lib/vz/images/100/vm-100-disk10.raw', fmt=raw size=4294967296
successfully created 'local:100/vm-100-disk10.raw'
|
|
The image name must conform to above naming conventions. 镜像名称必须符合上述命名规范。 |
The real file system path is shown with:
真实的文件系统路径显示为:
# pvesm path local:100/vm-100-disk10.raw
/var/lib/vz/images/100/vm-100-disk10.raw
And you can remove the image with:
您可以使用以下命令删除镜像:
# pvesm free local:100/vm-100-disk10.raw
7.6. NFS Backend 7.6. NFS 后端
Storage pool type: nfs 存储池类型:nfs
The NFS backend is based on the directory backend, so it shares most
properties. The directory layout and the file naming conventions are
the same. The main advantage is that you can directly configure the
NFS server properties, so the backend can mount the share
automatically. There is no need to modify /etc/fstab. The backend
can also test if the server is online, and provides a method to query
the server for exported shares.
NFS 后端基于目录后端,因此它们共享大多数属性。目录布局和文件命名约定相同。主要优点是您可以直接配置 NFS 服务器属性,因此后端可以自动挂载共享。无需修改 /etc/fstab。后端还可以测试服务器是否在线,并提供一种方法来查询服务器导出的共享。
7.6.1. Configuration 7.6.1. 配置
The backend supports all common storage properties, except the shared
flag, which is always set. Additionally, the following properties are
used to configure the NFS server:
后端支持所有常见的存储属性,除了共享标志,该标志始终被设置。此外,以下属性用于配置 NFS 服务器:
- server
-
Server IP or DNS name. To avoid DNS lookup delays, it is usually preferable to use an IP address instead of a DNS name - unless you have a very reliable DNS server, or list the server in the local /etc/hosts file.
服务器 IP 或 DNS 名称。为了避免 DNS 查询延迟,通常建议使用 IP 地址而非 DNS 名称——除非您有非常可靠的 DNS 服务器,或者在本地 /etc/hosts 文件中列出了该服务器。 - export 导出路径
-
NFS export path (as listed by pvesm nfsscan).
NFS 导出路径(由 pvesm nfsscan 列出)。
You can also set NFS mount options:
您还可以设置 NFS 挂载选项:
- path 路径
-
The local mount point (defaults to /mnt/pve/<STORAGE_ID>/).
本地挂载点(默认为 /mnt/pve/<STORAGE_ID>/)。 - content-dirs 内容目录
-
Overrides for the default directory layout. Optional.
覆盖默认的目录布局。可选。 - options 选项
-
NFS mount options (see man nfs).
NFS 挂载选项(参见 man nfs)。
Configuration example (/etc/pve/storage.cfg)
配置示例(/etc/pve/storage.cfg)
nfs: iso-templates
path /mnt/pve/iso-templates
server 10.0.0.10
export /space/iso-templates
options vers=3,soft
content iso,vztmpl
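The same pool can be created from the shell. pvesm nfsscan (mentioned above) lists the exports of a server, and the add command simply mirrors the configuration keys:
# pvesm nfsscan 10.0.0.10
# pvesm add nfs iso-templates --server 10.0.0.10 --export /space/iso-templates --content iso,vztmpl --options vers=3,soft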
|
|
After an NFS request times out, NFS requests are retried
indefinitely by default. This can lead to unexpected hangs on the
client side. For read-only content, it is worth considering the NFS
soft option, which limits the number of retries to three. NFS 请求超时后,默认情况下会无限次重试。这可能导致客户端出现意外挂起。对于只读内容,建议考虑使用 NFS 的 soft 选项,该选项将重试次数限制为三次。 |
7.6.2. Storage Features 7.6.2. 存储功能
NFS does not support snapshots, but the backend uses qcow2 features
to implement snapshots and cloning.
NFS 不支持快照,但后端使用 qcow2 功能来实现快照和克隆。
| Content types 内容类型 | Image formats 镜像格式 | Shared 共享 | Snapshots 快照 | Clones 克隆 |
|---|---|---|---|---|
| images rootdir vztmpl iso backup snippets | raw qcow2 vmdk | yes 是 | qcow2 | qcow2 |
7.7. CIFS Backend 7.7. CIFS 后端
Storage pool type: cifs 存储池类型:cifs
The CIFS backend extends the directory backend, so that no manual
setup of a CIFS mount is needed. Such a storage can be added directly
through the Proxmox VE API or the web UI, with all our backend advantages,
like server heartbeat check or comfortable selection of exported
shares.
CIFS 后端扩展了目录后端,因此无需手动设置 CIFS 挂载。此类存储可以直接通过 Proxmox VE API 或网页界面添加,具备所有后端优势,如服务器心跳检测或便捷选择导出共享。
7.7.1. Configuration 7.7.1. 配置
The backend supports all common storage properties, except the shared
flag, which is always set. Additionally, the following CIFS special
properties are available:
该后端支持所有常见的存储属性,除了共享标志,该标志始终被设置。此外,还提供以下 CIFS 特殊属性:
- server 服务器
-
Server IP or DNS name. Required.
服务器 IP 或 DNS 名称。必填。
|
|
To avoid DNS lookup delays, it is usually preferable to use an IP
address instead of a DNS name - unless you have a very reliable DNS
server, or list the server in the local /etc/hosts file. 为了避免 DNS 查询延迟,通常建议使用 IP 地址而非 DNS 名称——除非您有非常可靠的 DNS 服务器,或者在本地 /etc/hosts 文件中列出了该服务器。 |
- share 共享
-
CIFS share to use (get available ones with pvesm scan cifs <address> or the web UI). Required.
要使用的 CIFS 共享(可通过 pvesm scan cifs <address> 或网页界面获取可用共享)。必填。 - username 用户名
-
The username for the CIFS storage. Optional, defaults to ‘guest’.
CIFS 存储的用户名。可选,默认为“guest”。 - password 密码
-
The user password. Optional. It will be saved in a file only readable by root (/etc/pve/priv/storage/<STORAGE-ID>.pw).
用户密码。可选。它将被保存在只有 root 可读的文件中(/etc/pve/priv/storage/<STORAGE-ID>.pw)。 - domain 域
-
Sets the user domain (workgroup) for this storage. Optional.
为此存储设置用户域(工作组)。可选。 - smbversion
-
SMB protocol Version. Optional, default is 3. SMB1 is not supported due to security issues.
SMB 协议版本。可选,默认是 3。由于安全问题,不支持 SMB1。 - path 路径
-
The local mount point. Optional, defaults to /mnt/pve/<STORAGE_ID>/.
本地挂载点。可选,默认是 /mnt/pve/<STORAGE_ID>/。 - content-dirs 内容目录
-
Overrides for the default directory layout. Optional.
覆盖默认目录布局。可选。 - options 选项
-
Additional CIFS mount options (see man mount.cifs). Some options are set automatically and shouldn’t be set here. Proxmox VE will always set the option soft. Depending on the configuration, these options are set automatically: username, credentials, guest, domain, vers.
额外的 CIFS 挂载选项(参见 man mount.cifs)。某些选项会自动设置,不应在此处设置。Proxmox VE 始终会设置 soft 选项。根据配置,以下选项会自动设置:username、credentials、guest、domain、vers。 - subdir 子目录
-
The subdirectory of the share to mount. Optional, defaults to the root directory of the share.
要挂载的共享子目录。可选,默认为共享的根目录。
Configuration example (/etc/pve/storage.cfg)
配置示例(/etc/pve/storage.cfg)
cifs: backup
path /mnt/pve/backup
server 10.0.0.11
share VMData
content backup
options noserverino,echo_interval=30
username anna
smbversion 3
subdir /data
7.7.2. Storage Features 7.7.2. 存储功能
CIFS does not support snapshots on a storage level. But you may use
qcow2 backing files if you still want to have snapshots and cloning
features available.
CIFS 不支持存储级别的快照。但如果您仍希望使用快照和克隆功能,可以使用 qcow2 后端文件。
| Content types 内容类型 | Image formats 图像格式 | Shared 共享 | Snapshots 快照 | Clones 克隆 |
|---|---|---|---|---|
| images rootdir vztmpl iso backup snippets | raw qcow2 vmdk | yes 是 | qcow2 | qcow2 |
7.7.3. Examples 7.7.3. 示例
You can get a list of exported CIFS shares with:
您可以使用以下命令获取导出的 CIFS 共享列表:
# pvesm scan cifs <server> [--username <username>] [--password]
Then you can add one of these shares as a storage to the whole Proxmox VE cluster
with:
然后,您可以将其中一个共享作为存储添加到整个 Proxmox VE 集群中:
# pvesm add cifs <storagename> --server <server> --share <share> [--username <username>] [--password]
7.8. Proxmox Backup Server
7.8. Proxmox 备份服务器
Storage pool type: pbs 存储池类型:pbs
This backend allows direct integration of a Proxmox Backup Server into Proxmox VE
like any other storage.
A Proxmox Backup storage can be added directly through the Proxmox VE API, CLI or
the web interface.
该后端允许将 Proxmox 备份服务器像其他存储一样直接集成到 Proxmox VE 中。Proxmox 备份存储可以通过 Proxmox VE 的 API、命令行界面或网页界面直接添加。
7.8.1. Configuration 7.8.1. 配置
The backend supports all common storage properties, except the shared flag,
which is always set. Additionally, the following special properties to Proxmox
Backup Server are available:
该后端支持所有常见的存储属性,除了共享标志,该标志始终被设置。此外,还提供了以下针对 Proxmox 备份服务器的特殊属性:
- server
-
Server IP or DNS name. Required.
服务器 IP 或 DNS 名称。必填。 - port 端口
-
Use this port instead of the default one, i.e. 8007. Optional.
使用此端口替代默认端口,即 8007。可选。 - username 用户名
-
The username for the Proxmox Backup Server storage. Required.
Proxmox 备份服务器存储的用户名。必填。
|
|
Do not forget to add the realm to the username. For example, root@pam or
archiver@pbs. 不要忘记在用户名中添加域。例如,root@pam 或 archiver@pbs。 |
- password 密码
-
The user password. The value will be saved in a file under /etc/pve/priv/storage/<STORAGE-ID>.pw with access restricted to the root user. Required.
用户密码。该值将保存在 /etc/pve/priv/storage/<STORAGE-ID>.pw 文件中,访问权限仅限于 root 用户。必填。 - datastore 数据存储
-
The ID of the Proxmox Backup Server datastore to use. Required.
要使用的 Proxmox 备份服务器数据存储的 ID。必填。 - fingerprint 指纹
-
The fingerprint of the Proxmox Backup Server API TLS certificate. You can get it in the server's dashboard or using the proxmox-backup-manager cert info command. Required for self-signed certificates or any other certificate where the host does not trust the server's CA.
Proxmox 备份服务器 API TLS 证书的指纹。您可以在服务器仪表板中获取,或使用 proxmox-backup-manager cert info 命令获取。对于自签名证书或任何主机不信任服务器 CA 的证书,必填。 - encryption-key
-
A key to encrypt the backup data from the client side. Currently only non-password-protected keys (no key derivation function (kdf)) are supported. The key will be saved in a file under /etc/pve/priv/storage/<STORAGE-ID>.enc with access restricted to the root user. Use the magic value autogen to automatically generate a new one using proxmox-backup-client key create --kdf none <path>. Optional.
用于从客户端加密备份数据的密钥。目前仅支持无密码保护(无密钥派生函数(kdf))的密钥。将保存在 /etc/pve/priv/storage/<STORAGE-ID>.enc 文件中,访问权限仅限 root 用户。使用特殊值 autogen 可通过 proxmox-backup-client key create --kdf none <path> 自动生成新的密钥。可选。 - master-pubkey
-
A public RSA key used to encrypt the backup encryption key as part of the backup task. Will be saved in a file under /etc/pve/priv/storage/<STORAGE-ID>.master.pem with access restricted to the root user. The encrypted copy of the backup encryption key will be appended to each backup and stored on the Proxmox Backup Server instance for recovery purposes. Optional, requires encryption-key.
用于加密备份加密密钥的公用 RSA 密钥,作为备份任务的一部分。将保存在 /etc/pve/priv/storage/<STORAGE-ID>.master.pem 文件中,访问权限仅限 root 用户。备份加密密钥的加密副本将附加到每个备份中,并存储在 Proxmox Backup Server 实例上以便恢复。可选,需配合 encryption-key 使用。
Configuration example (/etc/pve/storage.cfg)
配置示例(/etc/pve/storage.cfg)
pbs: backup
datastore main
server enya.proxmox.com
content backup
fingerprint 09:54:ef:..snip..:88:af:47:fe:4c:3b:cf:8b:26:88:0b:4e:3c:b2
prune-backups keep-all=1
username archiver@pbs
encryption-key a9:ee:c8:02:13:..snip..:2d:53:2c:98
master-pubkey 1
7.8.2. Storage Features 7.8.2. 存储功能
Proxmox Backup Server only supports backups; they can be block-level or
file-level based. Proxmox VE uses block-level backups for virtual machines and
file-level backups for containers.
Proxmox Backup Server 仅支持备份,备份可以是基于区块级别或文件级别的。Proxmox VE 对虚拟机使用区块级备份,对容器使用文件级备份。
| Content types 内容类型 | Image formats 图像格式 | Shared 共享 | Snapshots 快照 | Clones 克隆 |
|---|---|---|---|---|
| backup 备份 | n/a 不适用 | yes 是 | n/a 不适用 | n/a 不适用 |
7.8.3. Encryption 7.8.3. 加密
Optionally, you can configure client-side encryption with AES-256 in GCM mode.
Encryption can be configured either via the web interface, or on the CLI with
the encryption-key option (see above). The key will be saved in the file
/etc/pve/priv/storage/<STORAGE-ID>.enc, which is only accessible by the root
user.
您可以选择配置客户端使用 AES-256 GCM 模式进行加密。加密可以通过网页界面配置,也可以在命令行界面使用 encryption-key 选项配置(见上文)。密钥将保存在 /etc/pve/priv/storage/<STORAGE-ID>.enc 文件中,该文件仅 root 用户可访问。
|
|
Without their key, backups will be inaccessible. Thus, you should
keep keys ordered and in a place that is separate from the contents being
backed up. It can happen, for example, that you back up an entire system, using
a key on that system. If the system then becomes inaccessible for any reason
and needs to be restored, this will not be possible as the encryption key will be
lost along with the broken system. 没有密钥,备份将无法访问。因此,您应将密钥有序保存,并放置在与备份内容分开的地方。例如,您可能会备份整个系统,使用该系统上的密钥。如果该系统因任何原因变得无法访问并需要恢复,则无法完成恢复,因为加密密钥会随损坏的系统一起丢失。 |
It is recommended that you keep your key safe, but easily accessible, in
order for quick disaster recovery. For this reason, the best place to store it
is in your password manager, where it is immediately recoverable. As a backup to
this, you should also save the key to a USB flash drive and store that in a secure
place. This way, it is detached from any system, but is still easy to recover
from, in case of emergency. Finally, in preparation for the worst case scenario,
you should also consider keeping a paper copy of your key locked away in a safe
place. The paperkey subcommand can be used to create a QR encoded version of
your key. The following command sends the output of the paperkey command to
a text file, for easy printing.
建议您将密钥妥善保管,但又能方便访问,以便快速进行灾难恢复。因此,存放密钥的最佳位置是密码管理器中,便于立即恢复。作为备份,您还应将密钥保存到 USB 闪存驱动器中,并将其存放在安全的地方。这样,密钥与任何系统分离,但在紧急情况下仍易于恢复。最后,为了应对最坏的情况,您还应考虑将密钥的纸质副本锁在安全的地方。paperkey 子命令可用于创建密钥的二维码编码版本。以下命令将 paperkey 命令的输出发送到文本文件,便于打印。
# proxmox-backup-client key paperkey /etc/pve/priv/storage/<STORAGE-ID>.enc --output-format text > qrkey.txt
Additionally, it is possible to use a single RSA master key pair for key
recovery purposes: configure all clients doing encrypted backups to use a
single public master key, and all subsequent encrypted backups will contain a
RSA-encrypted copy of the used AES encryption key. The corresponding private
master key allows recovering the AES key and decrypting the backup even if the
client system is no longer available.
此外,可以使用单个 RSA 主密钥对进行密钥恢复:配置所有执行加密备份的客户端使用单个公共主密钥,所有后续的加密备份将包含所使用的 AES 加密密钥的 RSA 加密副本。相应的私有主密钥允许恢复 AES 密钥并解密备份,即使客户端系统不再可用。
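A master key pair can be created with the Proxmox Backup client; the subcommand name below is taken from current proxmox-backup-client versions, so verify it with proxmox-backup-client key --help if in doubt. The resulting public key is then configured as the storage's master-pubkey option (see above):
# proxmox-backup-client key create-master-key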
|
|
The same safe-keeping rules apply to the master key pair as to the
regular encryption keys. Without a copy of the private key recovery is not
possible! The paperkey command supports generating paper copies of private
master keys for storage in a safe, physical location. 主密钥对的保管规则与常规加密密钥相同。没有私钥的副本,恢复是不可能的!paperkey 命令支持生成私有主密钥的纸质副本,以便存放在安全的物理位置。 |
Because the encryption is managed on the client side, you can use the same
datastore on the server for unencrypted backups and encrypted backups, even
if they are encrypted with different keys. However, deduplication between
backups with different keys is not possible, so it is often better to create
separate datastores.
由于加密是在客户端管理的,您可以在服务器上使用相同的数据存储来存放未加密备份和加密备份,即使它们使用不同的密钥加密。然而,不同密钥加密的备份之间无法进行重复数据删除,因此通常更好地创建单独的数据存储。
|
|
Do not use encryption if there is no benefit from it, for example, when
you are running the server locally in a trusted network. It is always easier to
recover from unencrypted backups. 如果加密没有带来好处,例如当您在受信任的网络中本地运行服务器时,请不要使用加密。恢复未加密备份总是更容易。 |
7.8.4. Example: Add Storage over CLI
7.8.4. 示例:通过命令行添加存储
You can get a list of available Proxmox Backup Server datastores with:
您可以使用以下命令获取可用的 Proxmox Backup Server 数据存储列表:
# pvesm scan pbs <server> <username> [--password <string>] [--fingerprint <string>]
Then you can add one of these datastores as a storage to the whole Proxmox VE
cluster with:
然后,您可以使用以下命令将其中一个数据存储添加为整个 Proxmox VE 集群的存储:
# pvesm add pbs <id> --server <server> --datastore <datastore> --username <username> --fingerprint 00:B4:... --password
7.9. GlusterFS Backend 7.9. GlusterFS 后端
Storage pool type: glusterfs
存储池类型:glusterfs
GlusterFS is a scalable network file system. The system uses a modular
design, runs on commodity hardware, and can provide a highly available
enterprise storage at low costs. Such a system is capable of scaling to
several petabytes, and can handle thousands of clients.
GlusterFS 是一个可扩展的网络文件系统。该系统采用模块化设计,运行在通用硬件上,能够以低成本提供高可用的企业级存储。该系统能够扩展到数拍字节,并能处理数千个客户端。
|
|
After a node/brick crash, GlusterFS does a full rsync to make
sure data is consistent. This can take a very long time with large
files, so this backend is not suitable to store large VM images. 在节点/砖块崩溃后,GlusterFS 会执行完整的 rsync 以确保数据一致性。对于大文件,这可能需要很长时间,因此该后端不适合存储大型虚拟机镜像。 |
7.9.1. Configuration 7.9.1. 配置
The backend supports all common storage properties, and adds the
following GlusterFS specific options:
后端支持所有常见的存储属性,并添加了以下 GlusterFS 特定选项:
- server
-
GlusterFS volfile server IP or DNS name.
GlusterFS volfile 服务器的 IP 或 DNS 名称。 - server2
-
Backup volfile server IP or DNS name.
备份 volfile 服务器的 IP 或 DNS 名称。 - volume 卷
-
GlusterFS Volume. GlusterFS 卷。
- transport 传输
-
GlusterFS transport: tcp, unix or rdma
GlusterFS 传输:tcp、unix 或 rdma
Configuration example (/etc/pve/storage.cfg)
配置示例(/etc/pve/storage.cfg)
glusterfs: Gluster
server 10.2.3.4
server2 10.2.3.5
volume glustervol
content images,iso
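The same storage can be added on the command line; the option names mirror the configuration keys listed above:
# pvesm add glusterfs Gluster --server 10.2.3.4 --server2 10.2.3.5 --volume glustervol --content images,iso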
7.9.2. File naming conventions
7.9.2. 文件命名规范
The directory layout and the file naming conventions are inherited
from the dir backend.
目录布局和文件命名规范继承自 dir 后端。
7.9.3. Storage Features 7.9.3. 存储功能
The storage provides a file level interface, but no native
snapshot/clone implementation.
该存储提供文件级接口,但没有原生的快照/克隆实现。
| Content types 内容类型 | Image formats 镜像格式 | Shared 共享 | Snapshots 快照 | Clones 克隆 |
|---|---|---|---|---|
| images vztmpl iso backup snippets | raw qcow2 vmdk | yes 是 | qcow2 | qcow2 |
7.10. Local ZFS Pool Backend
7.10. 本地 ZFS 池后端
Storage pool type: zfspool
存储池类型:zfspool
This backend allows you to access local ZFS pools (or ZFS file systems
inside such pools).
该后端允许您访问本地 ZFS 池(或这些池内的 ZFS 文件系统)。
7.10.1. Configuration 7.10.1. 配置
The backend supports the common storage properties content, nodes,
disable, and the following ZFS specific properties:
该后端支持常见的存储属性 content、nodes、disable,以及以下特定于 ZFS 的属性:
- pool 存储池
-
Select the ZFS pool/filesystem. All allocations are done within that pool.
选择 ZFS 存储池/文件系统。所有分配都在该存储池内进行。 - blocksize 块大小
-
Set ZFS blocksize parameter.
设置 ZFS 块大小参数。 - sparse 稀疏
-
Use ZFS thin-provisioning. A sparse volume is a volume whose reservation is not equal to the volume size.
使用 ZFS 细分配。稀疏卷是指其保留空间不等于卷大小的卷。 - mountpoint 挂载点
-
The mount point of the ZFS pool/filesystem. Changing this does not affect the mountpoint property of the dataset seen by zfs. Defaults to /<pool>.
ZFS 池/文件系统的挂载点。更改此项不会影响 zfs 所见数据集的挂载点属性。默认值为 /<pool>。
Configuration example (/etc/pve/storage.cfg)
配置示例(/etc/pve/storage.cfg)
zfspool: vmdata
pool tank/vmdata
content rootdir,images
sparse
7.10.2. File naming conventions
7.10.2. 文件命名规范
The backend uses the following naming scheme for VM images:
后端使用以下命名方案来命名虚拟机镜像:
vm-<VMID>-<NAME> // normal VM images base-<VMID>-<NAME> // template VM image (read-only) subvol-<VMID>-<NAME> // subvolumes (ZFS filesystem for containers)
- <VMID>
-
This specifies the owner VM.
这指定了所属虚拟机。 - <NAME>
-
This can be an arbitrary name (ascii) without white space. The backend uses disk[N] as default, where [N] is replaced by an integer to make the name unique.
这可以是一个任意名称(ASCII),且不含空白字符。后端默认使用 disk[N],其中 [N] 会被替换为一个整数以确保名称唯一。
7.10.3. Storage Features
7.10.3. 存储功能
ZFS is probably the most advanced storage type regarding snapshot and
cloning. The backend uses ZFS datasets for both VM images (format
raw) and container data (format subvol). ZFS properties are
inherited from the parent dataset, so you can simply set defaults
on the parent dataset.
ZFS 可能是快照和克隆方面最先进的存储类型。后端使用 ZFS 数据集来存储虚拟机镜像(格式为 raw)和容器数据(格式为 subvol)。ZFS 属性会从父数据集继承,因此您可以直接在父数据集上设置默认值。
| Content types 内容类型 | Image formats 镜像格式 | Shared 共享 | Snapshots 快照 | Clones 克隆 |
|---|---|---|---|---|
| images rootdir 镜像根目录 | raw subvol 原始子卷 | no 否 | yes 是 | yes 是 |
7.10.4. Examples 7.10.4. 示例
It is recommended to create an extra ZFS file system to store your VM images:
建议创建一个额外的 ZFS 文件系统来存储您的虚拟机镜像:
# zfs create tank/vmdata
To enable compression on that newly allocated file system:
在新分配的文件系统上启用压缩:
# zfs set compression=on tank/vmdata
You can get a list of available ZFS filesystems with:
您可以使用以下命令获取可用的 ZFS 文件系统列表:
# pvesm scan zfs
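The file system can then be registered as a storage; the options correspond to the configuration keys described above, and --sparse 1 is optional:
# pvesm add zfspool vmdata --pool tank/vmdata --content images,rootdir --sparse 1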
7.11. LVM Backend 7.11. LVM 后端
Storage pool type: lvm 存储池类型:lvm
LVM is a light software layer on top of hard disks and partitions. It
can be used to split available disk space into smaller logical
volumes. LVM is widely used on Linux and makes managing hard drives
easier.
LVM 是硬盘和分区之上的一个轻量级软件层。它可以用来将可用的磁盘空间划分成更小的逻辑卷。LVM 在 Linux 上被广泛使用,使硬盘管理更加方便。
Another use case is to put LVM on top of a big iSCSI LUN. That way you
can easily manage space on that iSCSI LUN, which would not be possible
otherwise, because the iSCSI specification does not define a
management interface for space allocation.
另一个用例是将 LVM 放在一个大的 iSCSI LUN 之上。这样你就可以轻松管理该 iSCSI LUN 上的空间,否则这是不可能的,因为 iSCSI 规范并未定义空间分配的管理接口。
7.11.1. Configuration 7.11.1. 配置
The LVM backend supports the common storage properties content, nodes,
disable, and the following LVM specific properties:
LVM 后端支持常见的存储属性 content、nodes、disable 以及以下特定于 LVM 的属性:
- vgname
-
LVM volume group name. This must point to an existing volume group.
LVM 卷组名称。此名称必须指向一个已存在的卷组。 - base
-
Base volume. This volume is automatically activated before accessing the storage. This is mostly useful when the LVM volume group resides on a remote iSCSI server.
基础卷。在访问存储之前,该卷会自动激活。当 LVM 卷组位于远程 iSCSI 服务器上时,这一功能尤为有用。 - saferemove
-
Called "Wipe Removed Volumes" in the web UI. Zero-out data when removing LVs. When removing a volume, this makes sure that all data gets erased and cannot be accessed by other LVs created later (which happen to be assigned the same physical extents). This is a costly operation, but may be required as a security measure in certain environments.
在网页界面中称为“擦除已移除卷”。删除逻辑卷时将数据清零。删除卷时,这确保所有数据被擦除,后续创建的其他逻辑卷(可能被分配了相同的物理区块)无法访问这些数据。这是一个耗时的操作,但在某些环境中可能作为安全措施是必要的。 - saferemove_throughput
-
Wipe throughput (cstream -t parameter value).
擦除吞吐量(cstream -t 参数值)。
Configuration example (/etc/pve/storage.cfg)
配置示例(/etc/pve/storage.cfg)
lvm: myspace
vgname myspace
content rootdir,images
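To build such a setup from scratch, create the volume group with the standard LVM tools first and then register it; /dev/sdb is only a placeholder device:
# pvcreate /dev/sdb
# vgcreate myspace /dev/sdb
# pvesm add lvm myspace --vgname myspace --content rootdir,images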
7.11.2. File naming conventions
7.11.2. 文件命名规范
The backend uses basically the same naming conventions as the ZFS pool
backend.
后端基本上使用与 ZFS 池后端相同的命名约定。
vm-<VMID>-<NAME> // normal VM images
7.11.3. Storage Features
7.11.3. 存储功能
LVM is a typical block storage, but this backend does not support
snapshots and clones. Unfortunately, normal LVM snapshots are quite
inefficient, because they interfere with all writes on the entire volume
group during snapshot time.
LVM 是一种典型的区块存储,但该后端不支持快照和克隆。不幸的是,普通的 LVM 快照效率较低,因为它们在快照期间会干扰整个卷组的所有写操作。
One big advantage is that you can use it on top of a shared storage,
for example, an iSCSI LUN. The backend itself implements proper cluster-wide
locking.
一个很大的优点是你可以将其用于共享存储之上,例如 iSCSI LUN。后端本身实现了适当的集群范围锁定。
|
|
The newer LVM-thin backend allows snapshots and clones, but does
not support shared storage. 较新的 LVM-thin 后端支持快照和克隆,但不支持共享存储。 |
| Content types 内容类型 | Image formats 镜像格式 | Shared 共享 | Snapshots 快照 | Clones 克隆 |
|---|---|---|---|---|
| images rootdir 镜像根目录 | raw 原始 | possible 可能的 | no 否 | no 否 |
7.12. LVM thin Backend
7.12. LVM thin 后端
Storage pool type: lvmthin
存储池类型:lvmthin
LVM normally allocates blocks when you create a volume. LVM thin pools
instead allocate blocks when they are written. This behaviour is
called thin-provisioning, because volumes can be much larger than
physically available space.
LVM 通常在创建卷时分配块。LVM 精简池则是在写入时分配块。这种行为称为精简配置,因为卷可以比物理可用空间大得多。
You can use the normal LVM command-line tools to manage and create LVM
thin pools (see man lvmthin for details). Assuming you already have
a LVM volume group called pve, the following commands create a new
LVM thin pool (size 100G) called data:
您可以使用常规的 LVM 命令行工具来管理和创建 LVM 精简池(详情请参见 man lvmthin)。假设您已经有一个名为 pve 的 LVM 卷组,以下命令创建一个新的 LVM 精简池(大小为 100G),名为 data:
lvcreate -L 100G -n data pve
lvconvert --type thin-pool pve/data
7.12.1. Configuration 7.12.1. 配置
The LVM thin backend supports the common storage properties content, nodes,
disable, and the following LVM specific properties:
LVM 精简后端支持常见的存储属性 content、nodes、disable,以及以下 LVM 特定属性:
- vgname
-
LVM volume group name. This must point to an existing volume group.
LVM 卷组名称。此名称必须指向一个已存在的卷组。 - thinpool
-
The name of the LVM thin pool.
LVM 细分池的名称。
Configuration example (/etc/pve/storage.cfg)
配置示例(/etc/pve/storage.cfg)
lvmthin: local-lvm
thinpool data
vgname pve
content rootdir,images
7.12.2. File naming conventions
7.12.2. 文件命名规范
The backend uses basically the same naming conventions as the ZFS pool
backend.
该后端基本上使用与 ZFS 池后端相同的命名规范。
vm-<VMID>-<NAME> // normal VM images
7.12.3. Storage Features
7.12.3. 存储功能
LVM thin is a block storage, but fully supports snapshots and clones
efficiently. New volumes are automatically initialized with zero.
LVM thin 是一种块存储,但完全支持高效的快照和克隆。新卷会自动初始化为零。
It must be mentioned that LVM thin pools cannot be shared across
multiple nodes, so you can only use them as local storage.
必须提到的是,LVM thin 池不能在多个节点之间共享,因此只能将其用作本地存储。
| Content types 内容类型 | Image formats 镜像格式 | Shared 共享 | Snapshots 快照 | Clones 克隆 |
|---|---|---|---|---|
| images rootdir 镜像根目录 | raw 原始 | no 否 | yes 是 | yes 是 |
7.13. Open-iSCSI initiator
7.13. Open-iSCSI 发起端
Storage pool type: iscsi 存储池类型:iscsi
iSCSI is a widely employed technology used to connect to storage
servers. Almost all storage vendors support iSCSI. There are also open
source iSCSI target solutions available,
e.g. OpenMediaVault, which is based on
Debian.
iSCSI 是一种广泛使用的技术,用于连接存储服务器。几乎所有存储厂商都支持 iSCSI。也有开源的 iSCSI 目标解决方案,例如基于 Debian 的 OpenMediaVault。
To use this backend, you need to install the
Open-iSCSI (open-iscsi) package. This is a
standard Debian package, but it is not installed by default to save
resources.
要使用此后端,您需要安装 Open-iSCSI(open-iscsi)包。这是一个标准的 Debian 包,但默认未安装以节省资源。
# apt-get install open-iscsi
Low-level iSCSI management tasks can be done using the iscsiadm tool.
低级别的 iscsi 管理任务可以使用 iscsiadm 工具完成。
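For example, discovering the targets offered by a portal and then adding one of them as a storage could look like this; the portal and target below match the configuration example that follows:
# iscsiadm -m discovery -t sendtargets -p 10.10.10.1
# pvesm add iscsi mynas --portal 10.10.10.1 --target iqn.2006-01.openfiler.com:tsn.dcb5aaaddd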
7.13.1. Configuration 7.13.1. 配置
The backend supports the common storage properties content, nodes,
disable, and the following iSCSI specific properties:
后端支持常见的存储属性 content、nodes、disable 以及以下 iSCSI 特定属性:
- portal
-
iSCSI portal (IP or DNS name with optional port).
iSCSI 端口(IP 或 DNS 名称,可选端口)。 - target 目标
-
iSCSI target. iSCSI 目标。
Configuration example (/etc/pve/storage.cfg)
配置示例(/etc/pve/storage.cfg)
iscsi: mynas
portal 10.10.10.1
target iqn.2006-01.openfiler.com:tsn.dcb5aaaddd
content none
|
|
If you want to use LVM on top of iSCSI, it makes sense to set
content none. That way it is not possible to create VMs using iSCSI
LUNs directly. 如果您想在 iSCSI 之上使用 LVM,设置 content 为 none 是合理的。这样就无法直接使用 iSCSI LUN 创建虚拟机。 |
7.13.2. File naming conventions
7.13.2. 文件命名规范
The iSCSI protocol does not define an interface to allocate or delete
data. Instead, that needs to be done on the target side and is vendor
specific. The target simply exports the allocated space as numbered LUNs. So Proxmox VE
iSCSI volume names just encode some information about the LUN as seen
by the Linux kernel.
iSCSI 协议并未定义分配或删除数据的接口。这需要在目标端完成,并且是厂商特定的。目标端只是将它们作为编号的 LUN 导出。因此,Proxmox VE iSCSI 卷名称仅编码了 Linux 内核所见的 LUN 的一些信息。
7.13.3. Storage Features
7.13.3. 存储特性
iSCSI is a block level type storage, and provides no management
interface. So it is usually best to export one big LUN, and setup LVM
on top of that LUN. You can then use the LVM plugin to manage the
storage on that iSCSI LUN.
iSCSI 是一种块级存储类型,不提供管理接口。因此,通常最好导出一个大的 LUN,并在该 LUN 上设置 LVM。然后,您可以使用 LVM 插件来管理该 iSCSI LUN 上的存储。
| Content types 内容类型 | Image formats 图像格式 | Shared 共享 | Snapshots 快照 | Clones 克隆 |
|---|---|---|---|---|
| images none 镜像 无 | raw 原始 | yes 是 | no 否 | no 否 |
7.14. User Mode iSCSI Backend
7.14. 用户模式 iSCSI 后端
Storage pool type: iscsidirect
存储池类型:iscsidirect
This backend provides basically the same functionality as the Open-iSCSI backend,
but uses a user-level library to implement it. You need to install the
libiscsi-bin package in order to use this backend.
该后端基本上提供与 Open-iSCSI 后端相同的功能,但使用用户级库来实现。您需要安装 libiscsi-bin 包才能使用此后端。
It should be noted that there are no kernel drivers involved, so this
can be viewed as a performance optimization. But this comes with the
drawback that you cannot use LVM on top of such an iSCSI LUN. So you need
to manage all space allocations at the storage server side.
需要注意的是,这里不涉及内核驱动,因此这可以被视为性能优化。但这也带来了一个缺点,即你不能在这样的 iSCSI LUN 之上使用 LVM。因此,你需要在存储服务器端管理所有的空间分配。
7.14.1. Configuration 7.14.1. 配置
The user mode iSCSI backend uses the same configuration options as the
Open-iSCSI backend.
用户态 iSCSI 后端使用与 Open-iSCSI 后端相同的配置选项。
Configuration example (/etc/pve/storage.cfg)
配置示例(/etc/pve/storage.cfg)
iscsidirect: faststore
portal 10.10.10.1
target iqn.2006-01.openfiler.com:tsn.dcb5aaaddd
7.14.2. Storage Features
7.14.2. 存储功能
|
|
This backend works with VMs only. Containers cannot use this
driver. 此后端仅适用于虚拟机。容器无法使用此驱动程序。 |
| Content types 内容类型 | Image formats 镜像格式 | Shared 共享 | Snapshots 快照 | Clones 克隆 |
|---|---|---|---|---|
| images 镜像 | raw 原始 | yes 是 | no 否 | no 否 |
7.15. Ceph RADOS Block Devices (RBD)
7.15. Ceph RADOS 块设备(RBD)
Storage pool type: rbd 存储池类型:rbd
Ceph is a distributed object store and file system
designed to provide excellent performance, reliability and
scalability. RADOS block devices implement a feature rich block level
storage, and you get the following advantages:
Ceph 是一个分布式对象存储和文件系统,旨在提供卓越的性能、可靠性和可扩展性。RADOS 块设备实现了功能丰富的块级存储,您将获得以下优势:
-
thin provisioning 精简配置
-
resizable volumes 可调整大小的卷
-
distributed and redundant (striped over multiple OSDs)
分布式且冗余(跨多个 OSD 条带化) -
full snapshot and clone capabilities
完整的快照和克隆功能 -
self healing 自我修复
-
no single point of failure
无单点故障 -
scalable to the exabyte level
可扩展至艾字节级别 -
kernel and user space implementation available
提供内核和用户空间实现
|
|
For smaller deployments, it is also possible to run Ceph
services directly on your Proxmox VE nodes. Recent hardware has plenty
of CPU power and RAM, so running storage services and VMs on the same node
is possible. 对于较小的部署,也可以直接在您的 Proxmox VE 节点上运行 Ceph 服务。近年来的硬件拥有充足的 CPU 性能和内存,因此在同一节点上运行存储服务和虚拟机是可行的。 |
7.15.1. Configuration 7.15.1. 配置
This backend supports the common storage properties nodes,
disable, content, and the following rbd specific properties:
该后端支持常见的存储属性节点、禁用、内容,以及以下 rbd 特定属性:
- monhost
-
List of monitor daemon IPs. Optional, only needed if Ceph is not running on the Proxmox VE cluster.
监视守护进程的 IP 列表。可选,仅当 Ceph 未在 Proxmox VE 集群上运行时需要。 - pool 池
-
Ceph pool name. Ceph 池名称。
- username 用户名
-
RBD user ID. Optional, only needed if Ceph is not running on the Proxmox VE cluster. Note that only the user ID should be used. The "client." type prefix must be left out.
RBD 用户 ID。可选,仅在 Ceph 未在 Proxmox VE 集群上运行时需要。请注意,只应使用用户 ID,必须省略“client.”类型的前缀。 - krbd
-
Enforce access to rados block devices through the krbd kernel module. Optional.
通过 krbd 内核模块强制访问 rados 块设备。可选。
|
|
Containers will use krbd independent of the option value. 容器将独立于该选项值使用 krbd。 |
Configuration example for an external Ceph cluster (/etc/pve/storage.cfg)
外部 Ceph 集群的配置示例(/etc/pve/storage.cfg)
rbd: ceph-external
monhost 10.1.1.20 10.1.1.21 10.1.1.22
pool ceph-external
content images
username admin
|
|
You can use the rbd utility to do low-level management tasks. 您可以使用 rbd 工具执行低级管理任务。 |
7.15.2. Authentication 7.15.2. 认证
|
|
If Ceph is installed locally on the Proxmox VE cluster, the following is done
automatically when adding the storage. 如果 Ceph 已在 Proxmox VE 集群本地安装,添加存储时会自动完成以下操作。 |
If you use cephx authentication, which is enabled by default, you need to
provide the keyring from the external Ceph cluster.
如果您使用 cephx 认证(默认启用),则需要提供来自外部 Ceph 集群的 keyring。
To configure the storage via the CLI, you first need to make the file
containing the keyring available. One way is to copy the file from the external
Ceph cluster directly to one of the Proxmox VE nodes. The following example will
copy it to the /root directory of the node on which we run it:
要通过 CLI 配置存储,首先需要使包含 keyring 的文件可用。一种方法是将该文件直接从外部 Ceph 集群复制到其中一个 Proxmox VE 节点。以下示例将其复制到我们运行命令的节点的 /root 目录:
# scp <external cephserver>:/etc/ceph/ceph.client.admin.keyring /root/rbd.keyring
Then use the pvesm CLI tool to configure the external RBD storage. Use the
--keyring parameter, which needs to be the path to the keyring file that you
copied. For example:
然后使用 pvesm CLI 工具配置外部 RBD 存储,使用 --keyring 参数,该参数需要是您复制的 keyring 文件的路径。例如:
# pvesm add rbd <name> --monhost "10.1.1.20 10.1.1.21 10.1.1.22" --content images --keyring /root/rbd.keyring
When configuring an external RBD storage via the GUI, you can copy and paste
the keyring into the appropriate field.
通过 GUI 配置外部 RBD 存储时,您可以将 keyring 复制并粘贴到相应字段中。
The keyring will be stored at
密钥环将存储在
# /etc/pve/priv/ceph/<STORAGE_ID>.keyring
|
|
Creating a keyring with only the needed capabilities is recommended when
connecting to an external cluster. For further information on Ceph user
management, see the Ceph docs.[13] 连接到外部集群时,建议仅创建具有所需权限的密钥环。有关 Ceph 用户管理的更多信息,请参见 Ceph 文档。[13] |
7.15.3. Ceph client configuration (optional)
7.15.3. Ceph 客户端配置(可选)
Connecting to an external Ceph storage doesn’t always allow setting
client-specific options in the config DB on the external cluster. You can add a
ceph.conf beside the Ceph keyring to change the Ceph client configuration for
the storage.
连接到外部 Ceph 存储时,并不总是允许在外部集群的配置数据库中设置客户端特定选项。您可以在 Ceph 密钥环旁边添加一个 ceph.conf 文件,以更改存储的 Ceph 客户端配置。
The ceph.conf needs to have the same name as the storage.
ceph.conf 文件需要与存储名称相同。
# /etc/pve/priv/ceph/<STORAGE_ID>.conf
See the RBD configuration reference [14] for possible settings.
有关可能的设置,请参见 RBD 配置参考 [14]。
|
|
Do not change these settings lightly. Proxmox VE is merging the
<STORAGE_ID>.conf with the storage configuration. 请勿轻易更改这些设置。Proxmox VE 会将 <STORAGE_ID>.conf 与存储配置合并。 |
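For illustration only, a minimal client configuration, assuming the external storage from the example above is named ceph-external and you want to tune the librbd cache. The option names are standard Ceph client settings; the values are placeholders to adapt to your setup:
# /etc/pve/priv/ceph/ceph-external.conf
[client]
rbd_cache = true
rbd_cache_size = 67108864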
7.15.4. Storage Features
7.15.4. 存储功能
The rbd backend is a block level storage, and implements full
snapshot and clone functionality.
rbd 后端是一种块级存储,实现了完整的快照和克隆功能。
| Content types 内容类型 | Image formats 镜像格式 | Shared 共享 | Snapshots 快照 | Clones 克隆 |
|---|---|---|---|---|
images rootdir 镜像根目录 |
raw 原始 |
yes 是 |
yes 是 |
yes 是 |
7.16. Ceph Filesystem (CephFS)
7.16. Ceph 文件系统(CephFS)
Storage pool type: cephfs
存储池类型:cephfs
CephFS implements a POSIX-compliant filesystem, using a Ceph
storage cluster to store its data. As CephFS builds upon Ceph, it shares most of
its properties. This includes redundancy, scalability, self-healing, and high
availability.
CephFS 实现了一个符合 POSIX 标准的文件系统,使用 Ceph 存储集群来存储其数据。由于 CephFS 构建于 Ceph 之上,因此它共享 Ceph 的大部分特性,包括冗余、可扩展性、自我修复和高可用性。
|
|
Proxmox VE can manage Ceph setups, which makes
configuring a CephFS storage easier. As modern hardware offers a lot of
processing power and RAM, running storage services and VMs on same node is
possible without a significant performance impact. Proxmox VE 可以管理 Ceph 配置,这使得配置 CephFS 存储更加容易。由于现代硬件提供了大量的处理能力和内存,在同一节点上运行存储服务和虚拟机成为可能,且不会对性能产生显著影响。 |
To use the CephFS storage plugin, you must replace the stock Debian Ceph client
by adding our Ceph repository.
Once added, run apt update, followed by apt dist-upgrade, in order to get
the newest packages.
要使用 CephFS 存储插件,您必须通过添加我们的 Ceph 代码仓库来替换默认的 Debian Ceph 客户端。添加后,运行 apt update,然后运行 apt dist-upgrade,以获取最新的软件包。
|
|
Please ensure that there are no other Ceph repositories configured.
Otherwise the installation will fail or there will be mixed package versions on
the node, leading to unexpected behavior. 请确保没有配置其他 Ceph 代码仓库。否则安装将失败,或者节点上会出现混合的软件包版本,导致意外行为。 |
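For reference, these are the two commands mentioned above, run as root on each node after the repository has been added:
# apt update
# apt dist-upgrade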
7.16.1. Configuration 7.16.1. 配置
This backend supports the common storage properties nodes,
disable, content, as well as the following cephfs specific properties:
该后端支持常见的存储属性 nodes、disable、content,以及以下 cephfs 特定属性:
- fs-name
-
Name of the Ceph FS.
Ceph 文件系统的名称。 - monhost
-
List of monitor daemon addresses. Optional, only needed if Ceph is not running on the Proxmox VE cluster.
监视守护进程地址列表。可选,仅当 Ceph 未在 Proxmox VE 集群上运行时需要。 - path 路径
-
The local mount point. Optional, defaults to /mnt/pve/<STORAGE_ID>/.
本地挂载点。可选,默认为 /mnt/pve/<STORAGE_ID>/。 - username 用户名
-
Ceph user id. Optional, only needed if Ceph is not running on the Proxmox VE cluster, where it defaults to admin.
Ceph 用户 ID。可选,仅当 Ceph 未在 Proxmox VE 集群上运行时需要,默认值为 admin。 - subdir 子目录
-
CephFS subdirectory to mount. Optional, defaults to /.
要挂载的 CephFS 子目录。可选,默认为 /。 - fuse
-
Access CephFS through FUSE, instead of the kernel client. Optional, defaults to 0.
通过 FUSE 访问 CephFS,而不是内核客户端。可选,默认为 0。
Configuration example for an external Ceph cluster (/etc/pve/storage.cfg)
外部 Ceph 集群的配置示例(/etc/pve/storage.cfg)
cephfs: cephfs-external
monhost 10.1.1.20 10.1.1.21 10.1.1.22
path /mnt/pve/cephfs-external
content backup
username admin
fs-name cephfs
|
|
Don’t forget to set up the client’s secret key file, if cephx was not
disabled. 如果未禁用 cephx,请不要忘记设置客户端的密钥文件。 |
7.16.2. Authentication 7.16.2. 认证
|
|
If Ceph is installed locally on the Proxmox VE cluster, the following is done
automatically when adding the storage. 如果 Ceph 安装在 Proxmox VE 集群的本地,添加存储时会自动完成以下操作。 |
If you use cephx authentication, which is enabled by default, you need to
provide the secret from the external Ceph cluster.
如果您使用 cephx 认证(默认启用),则需要提供来自外部 Ceph 集群的密钥。
To configure the storage via the CLI, you first need to make the file
containing the secret available. One way is to copy the file from the external
Ceph cluster directly to one of the Proxmox VE nodes. The following example will
copy it to the /root directory of the node on which we run it:
要通过 CLI 配置存储,首先需要使包含密钥的文件可用。一种方法是将该文件直接从外部 Ceph 集群复制到其中一个 Proxmox VE 节点。以下示例将其复制到我们运行命令的节点的 /root 目录:
# scp <external cephserver>:/etc/ceph/cephfs.secret /root/cephfs.secret
Then use the pvesm CLI tool to configure the external CephFS storage. Use the
--keyring parameter, which needs to be the path to the secret file that you
copied. For example:
然后使用 pvesm CLI 工具配置外部 CephFS 存储,使用 --keyring 参数,该参数需要是您复制的密钥文件的路径。例如:
# pvesm add cephfs <name> --monhost "10.1.1.20 10.1.1.21 10.1.1.22" --content backup --keyring /root/cephfs.secret
When configuring an external CephFS storage via the GUI, you can copy and paste
the secret into the appropriate field.
通过 GUI 配置外部 CephFS 存储时,您可以将密钥复制并粘贴到相应字段中。
The secret is only the key itself, as opposed to the rbd backend which also
contains a [client.userid] section.
密钥仅指密钥本身,而不是包含[client.userid]部分的 rbd 后端。
The secret will be stored at
密钥将被存储在
# /etc/pve/priv/ceph/<STORAGE_ID>.secret
A secret can be received from the Ceph cluster (as Ceph admin) by issuing the
command below, where userid is the client ID that has been configured to
access the cluster. For further information on Ceph user management, see the
Ceph docs.[13]
可以通过以下命令从 Ceph 集群(以 Ceph 管理员身份)获取密钥,其中 userid 是已配置为访问集群的客户端 ID。有关 Ceph 用户管理的更多信息,请参见 Ceph 文档。[13]
# ceph auth get-key client.userid > cephfs.secret
7.16.3. Storage Features
7.16.3. 存储功能
The cephfs backend is a POSIX-compliant filesystem, on top of a Ceph cluster.
cephfs 后端是一个符合 POSIX 标准的文件系统,构建在 Ceph 集群之上。
| Content types 内容类型 | Image formats 图像格式 | Shared 共享 | Snapshots 快照 | Clones 克隆 |
|---|---|---|---|---|
vztmpl iso backup snippets |
none 无 |
yes 是 |
yes[1] 是 [1] |
no 否 |
[1] While no known bugs exist, snapshots are not yet guaranteed to be stable,
as they lack sufficient testing.
[1] 虽然目前没有已知的错误,但快照尚未保证稳定,因为缺乏足够的测试。
7.17. BTRFS Backend 7.17. BTRFS 后端
Storage pool type: btrfs 存储池类型:btrfs
On the surface, this storage type is very similar to the directory storage type,
so see the directory backend section for a general overview.
表面上,这种存储类型与目录存储类型非常相似,因此请参阅目录后端部分以获取一般概述。
The main difference is that with this storage type raw formatted disks will be
placed in a subvolume, in order to allow taking snapshots and supporting offline
storage migration with snapshots being preserved.
主要区别在于,这种存储类型的原始格式化磁盘将被放置在子卷中,以便允许快照的创建并支持离线存储迁移,同时保留快照。
|
|
BTRFS will honor the O_DIRECT flag when opening files, meaning VMs
should not use cache mode none, otherwise there will be checksum errors. BTRFS 在打开文件时会遵守 O_DIRECT 标志,这意味着虚拟机不应使用缓存模式 none,否则会出现校验和错误。 |
7.17.1. Configuration 7.17.1. 配置
This backend is configured similarly to the directory storage. Note that when
adding a directory as a BTRFS storage, which is not itself also the mount point,
it is highly recommended to specify the actual mount point via the
is_mountpoint option.
此后端的配置方式与目录存储类似。请注意,当添加一个作为 BTRFS 存储的目录时,如果该目录本身不是挂载点,强烈建议通过 is_mountpoint 选项指定实际的挂载点。
For example, if a BTRFS file system is mounted at /mnt/data2 and its
pve-storage/ subdirectory (which may be a snapshot, which is recommended)
should be added as a storage pool called data2, you can use the following
entry:
例如,如果一个 BTRFS 文件系统挂载在/mnt/data2,并且其 pve-storage/子目录(可能是快照,推荐使用快照)应作为名为 data2 的存储池添加,可以使用以下条目:
btrfs: data2
path /mnt/data2/pve-storage
content rootdir,images
is_mountpoint /mnt/data2
7.18. ZFS over ISCSI Backend
7.18. 基于 ISCSI 的 ZFS 后端
Storage pool type: zfs 存储池类型:zfs
This backend accesses a remote machine having a ZFS pool as storage and an iSCSI
target implementation via ssh. For each guest disk it creates a ZVOL and,
exports it as iSCSI LUN. This LUN is used by Proxmox VE for the guest disk.
该后端通过 ssh 访问具有 ZFS 存储池和 iSCSI 目标实现的远程机器。对于每个虚拟机磁盘,它创建一个 ZVOL,并将其导出为 iSCSI LUN。Proxmox VE 使用此 LUN 作为虚拟机磁盘。
The following iSCSI target implementations are supported:
支持以下 iSCSI 目标实现:
-
LIO (Linux) LIO(Linux)
-
IET (Linux) IET(Linux)
-
ISTGT (FreeBSD) ISTGT(FreeBSD)
-
Comstar (Solaris) Comstar(Solaris)
|
|
This plugin needs a ZFS capable remote storage appliance, you cannot use
it to create a ZFS Pool on a regular Storage Appliance/SAN 此插件需要支持 ZFS 的远程存储设备,不能用它在普通存储设备/SAN 上创建 ZFS 池 |
7.18.1. Configuration 7.18.1. 配置
In order to use the ZFS over iSCSI plugin you need to configure the remote
machine (target) to accept ssh connections from the Proxmox VE node. Proxmox VE connects to the target for creating the ZVOLs and exporting them via iSCSI.
Authentication is done through a ssh-key (without password protection) stored in
/etc/pve/priv/zfs/<target_ip>_id_rsa
为了使用 ZFS over iSCSI 插件,您需要配置远程机器(目标)以接受来自 Proxmox VE 节点的 ssh 连接。Proxmox VE 通过连接目标来创建 ZVOL 并通过 iSCSI 导出它们。认证通过存储在 /etc/pve/priv/zfs/<target_ip>_id_rsa 中的无密码保护的 ssh 密钥完成。
The following steps create a ssh-key and distribute it to the storage machine
with IP 192.0.2.1:
以下步骤创建一个 ssh 密钥并将其分发到 IP 为 192.0.2.1 的存储机器:
mkdir /etc/pve/priv/zfs
ssh-keygen -f /etc/pve/priv/zfs/192.0.2.1_id_rsa
ssh-copy-id -i /etc/pve/priv/zfs/192.0.2.1_id_rsa.pub root@192.0.2.1
ssh -i /etc/pve/priv/zfs/192.0.2.1_id_rsa root@192.0.2.1
The backend supports the common storage properties content, nodes,
disable, and the following ZFS over ISCSI specific properties:
后端支持常见的存储属性 content、nodes、disable,以及以下 ZFS over iSCSI 特定属性:
- pool 存储池
-
The ZFS pool/filesystem on the iSCSI target. All allocations are done within that pool.
iSCSI 目标上的 ZFS 存储池/文件系统。所有分配都在该存储池内完成。 - portal 门户
-
iSCSI portal (IP or DNS name with optional port).
iSCSI 门户(IP 或 DNS 名称,可选端口)。 - target 目标
-
iSCSI target. iSCSI 目标。
- iscsiprovider
-
The iSCSI target implementation used on the remote machine
远程机器上使用的 iSCSI 目标实现 - comstar_tg
-
target group for comstar views.
comstar 视图的目标组。 - comstar_hg
-
host group for comstar views.
comstar 视图的主机组。 - lio_tpg
-
target portal group for Linux LIO targets
Linux LIO 目标的目标门户组 - nowritecache
-
disable write caching on the target
禁用目标上的写缓存 - blocksize 块大小
-
Set ZFS blocksize parameter.
设置 ZFS 块大小参数。 - sparse 稀疏
-
Use ZFS thin-provisioning. A sparse volume is a volume whose reservation is not equal to the volume size.
使用 ZFS 薄配置。稀疏卷是指其保留空间不等于卷大小的卷。
Configuration Example (/etc/pve/storage.cfg)
配置示例(/etc/pve/storage.cfg)
zfs: lio
blocksize 4k
iscsiprovider LIO
pool tank
portal 192.0.2.111
target iqn.2003-01.org.linux-iscsi.lio.x8664:sn.xxxxxxxxxxxx
content images
lio_tpg tpg1
sparse 1

zfs: solaris
blocksize 4k
target iqn.2010-08.org.illumos:02:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx:tank1
pool tank
iscsiprovider comstar
portal 192.0.2.112
content images

zfs: freebsd
blocksize 4k
target iqn.2007-09.jp.ne.peach.istgt:tank1
pool tank
iscsiprovider istgt
portal 192.0.2.113
content images

zfs: iet
blocksize 4k
target iqn.2001-04.com.example:tank1
pool tank
iscsiprovider iet
portal 192.0.2.114
content images
7.18.2. Storage Features
7.18.2. 存储功能
The ZFS over iSCSI plugin provides a shared storage, which is capable of
snapshots. You need to make sure that the ZFS appliance does not become a single
point of failure in your deployment.
ZFS over iSCSI 插件提供了支持快照的共享存储。您需要确保 ZFS 设备不会成为部署中的单点故障。
| Content types 内容类型 | Image formats 图像格式 | Shared 共享 | Snapshots 快照 | Clones 克隆 |
|---|---|---|---|---|
images 镜像 |
raw 原始 |
yes 是 |
yes 是 |
no 否 |
8. Deploy Hyper-Converged Ceph Cluster
8. 部署超融合 Ceph 集群
8.1. Introduction 8.1. 介绍
Proxmox VE unifies your compute and storage systems, that is, you can use the same
physical nodes within a cluster for both computing (processing VMs and
containers) and replicated storage. The traditional silos of compute and
storage resources can be wrapped up into a single hyper-converged appliance.
Separate storage networks (SANs) and connections via network attached storage
(NAS) disappear. With the integration of Ceph, an open source software-defined
storage platform, Proxmox VE has the ability to run and manage Ceph storage directly
on the hypervisor nodes.
Proxmox VE 统一了您的计算和存储系统,也就是说,您可以在集群中使用相同的物理节点来进行计算(处理虚拟机和容器)和复制存储。传统的计算和存储资源孤岛可以整合成一个超融合设备。独立的存储网络(SAN)和通过网络附加存储(NAS)的连接将不复存在。通过集成 Ceph 这一开源软件定义存储平台,Proxmox VE 能够直接在虚拟机监控程序节点上运行和管理 Ceph 存储。
Ceph is a distributed object store and file system designed to provide
excellent performance, reliability and scalability.
Ceph 是一个分布式对象存储和文件系统,旨在提供卓越的性能、可靠性和可扩展性。
Some advantages of Ceph on Proxmox VE are:
Ceph 在 Proxmox VE 上的一些优势包括:
-
Easy setup and management via CLI and GUI
通过命令行界面(CLI)和图形用户界面(GUI)实现简便的设置和管理 -
Thin provisioning 精简配置
-
Snapshot support 快照支持
-
Self healing 自我修复
-
Scalable to the exabyte level
可扩展至艾字节级别 -
Provides block, file system, and object storage
提供区块、文件系统和对象存储 -
Setup pools with different performance and redundancy characteristics
设置具有不同性能和冗余特性的存储池 -
Data is replicated, making it fault tolerant
数据被复制,实现容错 -
Runs on commodity hardware
运行在通用硬件上 -
No need for hardware RAID controllers
无需硬件 RAID 控制器 -
Open source 开源
For small to medium-sized deployments, it is possible to install a Ceph server
for using RADOS Block Devices (RBD) or CephFS directly on your Proxmox VE cluster
nodes (see Ceph RADOS Block Devices (RBD)).
Recent hardware has a lot of CPU power and RAM, so running storage services and
virtual guests on the same node is possible.
对于中小型部署,可以在 Proxmox VE 集群节点上直接安装 Ceph 服务器,以使用 RADOS 区块设备(RBD)或 CephFS(参见 Ceph RADOS 区块设备(RBD))。现代硬件拥有强大的 CPU 性能和内存,因此可以在同一节点上运行存储服务和虚拟客户机。
To simplify management, Proxmox VE provides you native integration to install and
manage Ceph services on Proxmox VE nodes either via the built-in web interface, or
using the pveceph command line tool.
为了简化管理,Proxmox VE 为您提供了原生集成,可以通过内置的网页界面或使用 pveceph 命令行工具,在 Proxmox VE 节点上安装和管理 Ceph 服务。
8.2. Terminology 8.2. 术语
Ceph consists of multiple Daemons, for usage as an RBD storage:
Ceph 由多个守护进程组成,用于作为 RBD 存储:
-
Ceph Monitor (ceph-mon, or MON)
Ceph 监视器(ceph-mon,或 MON) -
Ceph Manager (ceph-mgr, or MGR)
Ceph 管理器(ceph-mgr,或 MGR) -
Ceph Metadata Service (ceph-mds, or MDS)
Ceph 元数据服务(ceph-mds,或称 MDS) -
Ceph Object Storage Daemon (ceph-osd, or OSD)
Ceph 对象存储守护进程(ceph-osd,或称 OSD)
8.3. Recommendations for a Healthy Ceph Cluster
8.3. 健康 Ceph 集群的建议
To build a hyper-converged Proxmox + Ceph Cluster, you must use at least three
(preferably) identical servers for the setup.
要构建一个超融合的 Proxmox + Ceph 集群,您必须使用至少三台(最好是)相同的服务器进行设置。
Check also the recommendations from
Ceph’s website.
还请查看 Ceph 官方网站的建议。
|
|
The recommendations below should be seen as a rough guidance for choosing
hardware. Therefore, it is still essential to adapt it to your specific needs.
You should test your setup and monitor health and performance continuously. 以下建议应被视为选择硬件的大致指导。因此,仍然需要根据您的具体需求进行调整。您应测试您的设置并持续监控健康状况和性能。 |
Ceph services can be classified into two categories:
Ceph 服务可以分为两类:
-
Intensive CPU usage, benefiting from high CPU base frequencies and multiple cores. Members of that category are:
高强度 CPU 使用,受益于高 CPU 基础频率和多核。属于该类别的有:-
Object Storage Daemon (OSD) services
对象存储守护进程(OSD)服务 -
Meta Data Service (MDS) used for CephFS
用于 CephFS 的元数据服务(MDS)
-
-
Moderate CPU usage, not needing multiple CPU cores. These are:
中等的 CPU 使用率,不需要多个 CPU 核心。这些是:-
Monitor (MON) services 监控(MON)服务
-
Manager (MGR) services 管理(MGR)服务
-
As a simple rule of thumb, you should assign at least one CPU core (or thread)
to each Ceph service to provide the minimum resources required for stable and
durable Ceph performance.
作为一个简单的经验法则,您应该为每个 Ceph 服务分配至少一个 CPU 核心(或线程),以提供 Ceph 稳定且持久性能所需的最低资源。
For example, if you plan to run a Ceph monitor, a Ceph manager and 6 Ceph OSDs
services on a node you should reserve 8 CPU cores purely for Ceph when targeting
basic and stable performance.
例如,如果您计划在一个节点上运行一个 Ceph 监视器、一个 Ceph 管理器和 6 个 Ceph OSD 服务,那么在追求基本且稳定的性能时,应为 Ceph 预留 8 个 CPU 核心。
Note that an OSD's CPU usage depends mostly on the performance of its disk. The
higher the possible IOPS (IO Operations per Second) of a disk, the more CPU an
OSD service can utilize.
For modern enterprise SSDs, such as NVMe drives that can permanently sustain a
high IOPS load of over 100,000 with sub-millisecond latency, each OSD can use
multiple CPU threads; four to six CPU threads per NVMe-backed OSD is likely for
very high performance disks.
请注意,OSD 的 CPU 使用率主要取决于磁盘性能。磁盘的 IOPS(每秒输入输出操作次数)越高,OSD 服务所能利用的 CPU 也越多。对于现代企业级 SSD 磁盘,如 NVMe,其可以持续承受超过 100,000 的高 IOPS 负载且延迟低于毫秒级,每个 OSD 可能会使用多个 CPU 线程,例如,每个基于 NVMe 的 OSD 使用四到六个 CPU 线程是非常高性能磁盘的常见情况。
Especially in a hyper-converged setup, the memory consumption needs to be
carefully planned out and monitored. In addition to the predicted memory usage
of virtual machines and containers, you must also account for having enough
memory available for Ceph to provide excellent and stable performance.
尤其是在超融合架构中,内存消耗需要仔细规划和监控。除了预测的虚拟机和容器的内存使用外,还必须确保有足够的内存供 Ceph 使用,以提供优异且稳定的性能。
As a rule of thumb, for roughly 1 TiB of data, 1 GiB of memory will be used
by an OSD. While the usage might be less under normal conditions, it will use
most during critical operations like recovery, re-balancing or backfilling.
That means that you should avoid maxing out your available memory already on
normal operation, but rather leave some headroom to cope with outages.
作为经验法则,大约每 1 TiB 的数据,OSD 将使用 1 GiB 的内存。虽然在正常情况下使用量可能较少,但在恢复、重新平衡或回填等关键操作期间,内存使用量会达到最高。这意味着你不应在正常操作时就将可用内存用尽,而应留出一定的余量以应对故障。
The OSD service itself will use additional memory. The Ceph BlueStore backend of
the daemon requires by default 3-5 GiB of memory (adjustable).
OSD 服务本身还会使用额外的内存。守护进程的 Ceph BlueStore 后端默认需要 3-5 GiB 的内存(可调节)。
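As a rough, illustrative calculation based on the rules of thumb above: a node with six 4 TiB OSDs stores about 24 TiB of data, so plan for roughly 24 GiB (1 GiB per TiB) plus about 3-5 GiB per BlueStore daemon, that is, somewhere around 45-55 GiB of RAM reserved for Ceph alone, in addition to the memory assigned to virtual guests.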
We recommend a network bandwidth of at least 10 Gbps, or more, to be used
exclusively for Ceph traffic. A meshed network setup
[18]
is also an option for three to five node clusters, if there are no 10+ Gbps
switches available.
我们建议至少使用 10 Gbps 或更高的网络带宽,专门用于 Ceph 流量。如果没有 10 Gbps 以上的交换机,三到五节点的集群也可以选择网状网络配置[18]。
|
|
The volume of traffic, especially during recovery, will interfere
with other services on the same network, especially the latency sensitive Proxmox VE
corosync cluster stack can be affected, resulting in possible loss of cluster
quorum. Moving the Ceph traffic to dedicated and physical separated networks
will avoid such interference, not only for corosync, but also for the networking
services provided by any virtual guests. 流量的大小,尤其是在恢复期间,会干扰同一网络上的其他服务,特别是对延迟敏感的 Proxmox VE corosync 集群堆栈可能会受到影响,导致集群仲裁丢失。将 Ceph 流量迁移到专用且物理隔离的网络,可以避免这种干扰,不仅对 corosync 有效,也对任何虚拟客户机提供的网络服务有效。 |
For estimating your bandwidth needs, you need to take the performance of your
disks into account. While a single HDD might not saturate a 1 Gbps link, multiple
HDD OSDs per node can already saturate 10 Gbps.
If modern NVMe-attached SSDs are used, a single one can already saturate 10 Gbps
of bandwidth, or more. For such high-performance setups we recommend at least
25 Gbps, while even 40 Gbps or 100+ Gbps might be required to utilize the full
performance potential of the underlying disks.
在估算带宽需求时,需要考虑磁盘的性能。虽然单个 HDD 可能无法饱和 1 Gb 链路,但每个节点的多个 HDD OSD 已经可以饱和 10 Gbps。如果使用现代 NVMe 连接的 SSD,单个 SSD 就能饱和 10 Gbps 或更高的带宽。对于这种高性能配置,我们建议至少使用 25 Gbps,甚至可能需要 40 Gbps 或 100+ Gbps,以充分利用底层磁盘的性能潜力。
If unsure, we recommend using three (physical) separate networks for
high-performance setups:
如果不确定,我们建议高性能配置使用三条(物理)独立网络:
-
one very high bandwidth (25+ Gbps) network for Ceph (internal) cluster traffic.
一条非常高带宽(25+ Gbps)的网络,用于 Ceph(内部)集群流量。 -
one high bandwidth (10+ Gbps) network for Ceph (public) traffic between the ceph server and ceph client storage traffic. Depending on your needs this can also be used to host the virtual guest traffic and the VM live-migration traffic.
一个高带宽(10+ Gbps)网络,用于 Ceph(公共)流量,在 Ceph 服务器和 Ceph 客户端存储流量之间。根据您的需求,这个网络也可以用来承载虚拟客户机流量和虚拟机实时迁移流量。 -
one medium bandwidth (1 Gbps) exclusive for the latency sensitive corosync cluster communication.
一个中等带宽(1 Gbps)专用于对延迟敏感的 corosync 集群通信。
When planning the size of your Ceph cluster, it is important to take the
recovery time into consideration. Especially with small clusters, recovery
might take long. It is recommended that you use SSDs instead of HDDs in small
setups to reduce recovery time, minimizing the likelihood of a subsequent
failure event during recovery.
在规划 Ceph 集群的规模时,考虑恢复时间非常重要。尤其是在小型集群中,恢复可能需要较长时间。建议在小型配置中使用 SSD 而非 HDD,以减少恢复时间,降低恢复期间发生后续故障事件的可能性。
In general, SSDs will provide more IOPS than spinning disks. With this in mind,
in addition to the higher cost, it may make sense to implement a
class based separation of pools. Another way to
speed up OSDs is to use a faster disk as a journal or
DB/Write-Ahead-Log device, see
creating Ceph OSDs.
If a faster disk is used for multiple OSDs, a proper balance between OSD
and WAL / DB (or journal) disk must be selected, otherwise the faster disk
becomes the bottleneck for all linked OSDs.
一般来说,SSD 提供的 IOPS 会比机械硬盘更多。考虑到这一点,除了更高的成本外,基于类的存储池分离可能是合理的。加速 OSD 的另一种方法是使用更快的硬盘作为日志或 DB/预写日志设备,详见创建 Ceph OSD。如果一块更快的硬盘被多个 OSD 共享,则必须在 OSD 和 WAL/DB(或日志)硬盘之间选择合适的平衡,否则这块更快的硬盘会成为所有关联 OSD 的瓶颈。
Aside from the disk type, Ceph performs best with an evenly sized and evenly
distributed amount of disks per node. For example, 4 x 500 GB disks within each
node are better than a mixed setup with a single 1 TB and three 250 GB disks.
除了硬盘类型外,Ceph 在每个节点拥有大小均匀且数量均衡的硬盘时表现最佳。例如,每个节点配备 4 块 500 GB 硬盘要优于混合配置的单块 1 TB 和三块 250 GB 硬盘。
You also need to balance OSD count and single OSD capacity. More capacity
allows you to increase storage density, but it also means that a single OSD
failure forces Ceph to recover more data at once.
你还需要平衡 OSD 数量和单个 OSD 容量。更大的容量可以提高存储密度,但也意味着单个 OSD 故障时,Ceph 需要一次性恢复更多数据。
As Ceph handles data object redundancy and multiple parallel writes to disks
(OSDs) on its own, using a RAID controller normally doesn’t improve
performance or availability. On the contrary, Ceph is designed to handle whole
disks on its own, without any abstraction in between. RAID controllers are not
designed for the Ceph workload and may complicate things and sometimes even
reduce performance, as their write and caching algorithms may interfere with
the ones from Ceph.
由于 Ceph 自身处理数据对象冗余和对磁盘(OSD)的多路并行写入,使用 RAID 控制器通常不会提升性能或可用性。相反,Ceph 设计为直接管理整个磁盘,不需要任何中间抽象层。RAID 控制器并非为 Ceph 工作负载设计,可能会使情况复杂化,有时甚至降低性能,因为它们的写入和缓存算法可能与 Ceph 的算法产生干扰。
|
|
Avoid RAID controllers. Use host bus adapter (HBA) instead. 避免使用 RAID 控制器。应使用主机总线适配器(HBA)。 |
8.4. Initial Ceph Installation & Configuration
8.4. Ceph 的初始安装与配置
8.4.1. Using the Web-based Wizard
8.4.1. 使用基于网页的向导
With Proxmox VE you have the benefit of an easy to use installation wizard
for Ceph. Click on one of your cluster nodes and navigate to the Ceph
section in the menu tree. If Ceph is not already installed, you will see a
prompt offering to do so.
使用 Proxmox VE,您可以享受一个易于使用的 Ceph 安装向导。点击您的集群中的一个节点,然后在菜单树中导航到 Ceph 部分。如果尚未安装 Ceph,您将看到一个提示,提供安装选项。
The wizard is divided into multiple sections, where each needs to
finish successfully, in order to use Ceph.
该向导分为多个部分,每个部分都需要成功完成,才能使用 Ceph。
First you need to choose which Ceph version you want to install. Prefer the one
from your other nodes, or the newest if this is the first node you install
Ceph.
首先,您需要选择要安装的 Ceph 版本。优先选择您其他节点上的版本,或者如果这是您安装 Ceph 的第一个节点,则选择最新版本。
After starting the installation, the wizard will download and install all the
required packages from Proxmox VE’s Ceph repository.
开始安装后,向导将从 Proxmox VE 的 Ceph 代码仓库下载并安装所有必需的软件包。
After finishing the installation step, you will need to create a configuration.
This step is only needed once per cluster, as this configuration is distributed
automatically to all remaining cluster members through Proxmox VE’s clustered
configuration file system (pmxcfs).
完成安装步骤后,您需要创建一个配置。此步骤每个集群只需执行一次,因为该配置会通过 Proxmox VE 的集群配置文件系统(pmxcfs)自动分发到所有其他集群成员。
The configuration step includes the following settings:
配置步骤包括以下设置:
-
Public Network: This network will be used for public storage communication (e.g., for virtual machines using a Ceph RBD backed disk, or a CephFS mount), and communication between the different Ceph services. This setting is required.
公共网络:该网络将用于公共存储通信(例如,使用 Ceph RBD 支持的磁盘的虚拟机,或 CephFS 挂载),以及不同 Ceph 服务之间的通信。此设置为必填项。
Separating your Ceph traffic from the Proxmox VE cluster communication (corosync), and possibly the front-facing (public) networks of your virtual guests, is highly recommended. Otherwise, Ceph’s high-bandwidth IO-traffic could cause interference with other low-latency dependent services.
强烈建议将您的 Ceph 流量与 Proxmox VE 集群通信(corosync)以及虚拟机的前端(公共)网络分开。否则,Ceph 的大带宽 IO 流量可能会干扰其他依赖低延迟的服务。 -
Cluster Network: Specify to separate the OSD replication and heartbeat traffic as well. This setting is optional.
集群网络:指定以分离 OSD 复制和心跳流量。此设置为可选。
Using a physically separated network is recommended, as it will relieve the Ceph public and the virtual guests network, while also providing a significant Ceph performance improvements.
建议使用物理隔离的网络,因为这将减轻 Ceph 公共网络和虚拟客户机网络的负担,同时显著提升 Ceph 性能。
The Ceph cluster network can be configured and moved to another physically separated network at a later time.
Ceph 集群网络可以配置,并在以后迁移到另一个物理隔离的网络。
You have two more options which are considered advanced and therefore should
only be changed if you know what you are doing.
您还有两个被视为高级的选项,因此只有在了解相关操作时才应更改。
-
Number of replicas: Defines how often an object is replicated.
副本数量:定义对象被复制的次数。 -
Minimum replicas: Defines the minimum number of required replicas for I/O to be marked as complete.
最小副本数:定义完成 I/O 操作所需的最小副本数量。
Additionally, you need to choose your first monitor node. This step is required.
此外,您需要选择第一个监视节点。此步骤是必需的。
That’s it. You should now see a success page as the last step, with further
instructions on how to proceed. Your system is now ready to start using Ceph.
To get started, you will need to create some additional monitors,
OSDs and at least one pool.
就是这样。您现在应该看到一个成功页面作为最后一步,其中包含下一步操作的说明。您的系统现在已准备好开始使用 Ceph。要开始使用,您需要创建一些额外的监视器、OSD 以及至少一个存储池。
The rest of this chapter will guide you through getting the most out of
your Proxmox VE based Ceph setup. This includes the aforementioned tips and
more, such as CephFS, which is a helpful addition to your
new Ceph cluster.
本章剩余部分将指导您如何充分利用基于 Proxmox VE 的 Ceph 设置。这包括前面提到的技巧以及更多内容,例如 CephFS,它是您新 Ceph 集群的一个有用补充。
8.4.2. CLI Installation of Ceph Packages
8.4.2. 使用命令行安装 Ceph 包
As an alternative to the recommended Proxmox VE Ceph installation wizard available
in the web interface, you can use the following CLI command on each node:
作为替代,除了在网页界面中推荐的 Proxmox VE Ceph 安装向导外,您还可以在每个节点上使用以下命令行命令:
pveceph install
This sets up an apt package repository in
/etc/apt/sources.list.d/ceph.list and installs the required software.
该命令会在 /etc/apt/sources.list.d/ceph.list 中设置一个 apt 包代码仓库,并安装所需的软件。
8.4.3. Initial Ceph configuration via CLI
8.4.3. 通过命令行界面进行初始 Ceph 配置
Use the Proxmox VE Ceph installation wizard (recommended) or run the
following command on one node:
使用 Proxmox VE Ceph 安装向导(推荐)或在一个节点上运行以下命令:
pveceph init --network 10.10.10.0/24
This creates an initial configuration at /etc/pve/ceph.conf with a
dedicated network for Ceph. This file is automatically distributed to
all Proxmox VE nodes, using pmxcfs. The command also
creates a symbolic link at /etc/ceph/ceph.conf, which points to that file.
Thus, you can simply run Ceph commands without the need to specify a
configuration file.
这将在 /etc/pve/ceph.conf 创建一个初始配置文件,并为 Ceph 设置专用网络。该文件会通过 pmxcfs 自动分发到所有 Proxmox VE 节点。该命令还会在 /etc/ceph/ceph.conf 创建一个符号链接,指向该文件。因此,您可以直接运行 Ceph 命令,无需指定配置文件。
8.5. Ceph Monitor 8.5. Ceph 监视器
The Ceph Monitor (MON)
[19]
maintains a master copy of the cluster map. For high availability, you need at
least 3 monitors. One monitor will already be installed if you
used the installation wizard. You won’t need more than 3 monitors, as long
as your cluster is small to medium-sized. Only really large clusters will
require more than this.
Ceph 监视器(MON)[19] 维护集群映射的主副本。为了实现高可用性,您需要至少 3 个监视器。如果您使用安装向导,系统会自动安装一个监视器。只要您的集群规模为小型到中型,就不需要超过 3 个监视器。只有非常大型的集群才需要更多监视器。
8.5.1. Create Monitors 8.5.1. 创建监视器
On each node where you want to place a monitor (three monitors are recommended),
create one by using the Ceph → Monitor tab in the GUI or run:
在每个您想放置监视器的节点上(建议三个监视器),可以通过 GUI 中的 Ceph → 监视器 标签创建,或者运行:
pveceph mon create
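If the node has several networks and the monitor address should not be auto-detected, it can typically be passed explicitly. This is a hedged sketch; verify the exact option name in man pveceph, and treat the address as a placeholder:
pveceph mon create --mon-address 10.10.10.2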
8.5.2. Destroy Monitors 8.5.2. 销毁监视器
To remove a Ceph Monitor via the GUI, first select a node in the tree view and
go to the Ceph → Monitor panel. Select the MON and click the Destroy
button.
要通过 GUI 移除 Ceph 监视器,首先在树视图中选择一个节点,然后进入 Ceph → 监视器面板。选择监视器并点击销毁按钮。
To remove a Ceph Monitor via the CLI, first connect to the node on which the MON
is running. Then execute the following command:
要通过 CLI 移除 Ceph 监视器,首先连接到运行该监视器的节点。然后执行以下命令:
pveceph mon destroy
|
|
At least three Monitors are needed for quorum. 至少需要三个监视器以达成法定人数。 |
8.6. Ceph Manager 8.6. Ceph 管理器
The Manager daemon runs alongside the monitors. It provides an interface to
monitor the cluster. Since the release of Ceph luminous, at least one ceph-mgr
[20] daemon is
required.
Manager 守护进程与监视器一起运行。它提供了一个接口来监控集群。自 Ceph luminous 版本发布以来,至少需要一个 ceph-mgr [20] 守护进程。
8.6.1. Create Manager 8.6.1. 创建 Manager
Multiple Managers can be installed, but only one Manager is active at any given
time.
可以安装多个 Manager,但任何时候只有一个 Manager 处于活动状态。
pveceph mgr create
|
|
It is recommended to install the Ceph Manager on the monitor nodes. For
high availability install more than one manager. 建议在监视器节点上安装 Ceph Manager。为了实现高可用性,安装多个 Manager。 |
8.6.2. Destroy Manager 8.6.2. 销毁管理器
To remove a Ceph Manager via the GUI, first select a node in the tree view and
go to the Ceph → Monitor panel. Select the Manager and click the
Destroy button.
要通过 GUI 移除 Ceph 管理器,首先在树视图中选择一个节点,然后进入 Ceph → 监视器面板。选择管理器并点击销毁按钮。
To remove a Ceph Manager via the CLI, first connect to the node on which the
Manager is running. Then execute the following command:
要通过 CLI 移除 Ceph 管理器,首先连接到运行该管理器的节点。然后执行以下命令:
pveceph mgr destroy
|
|
While a manager is not a hard-dependency, it is crucial for a Ceph cluster,
as it handles important features like PG-autoscaling, device health monitoring,
telemetry and more. 虽然管理器不是硬依赖,但它对 Ceph 集群至关重要,因为它负责处理诸如 PG 自动扩展、设备健康监控、遥测等重要功能。 |
8.7. Ceph OSDs 8.7. Ceph OSD
Ceph Object Storage Daemons store objects for Ceph over the
network. It is recommended to use one OSD per physical disk.
Ceph 对象存储守护进程通过网络存储 Ceph 的对象。建议每个物理磁盘使用一个 OSD。
8.7.1. Create OSDs 8.7.1. 创建 OSD
You can create an OSD either via the Proxmox VE web interface or via the CLI using
pveceph. For example:
您可以通过 Proxmox VE 网页界面或使用 pveceph 命令行工具创建 OSD。例如:
pveceph osd create /dev/sd[X]
|
|
We recommend a Ceph cluster with at least three nodes and at least 12
OSDs, evenly distributed among the nodes. 我们建议使用至少三个节点且至少 12 个 OSD 的 Ceph 集群,OSD 应均匀分布在各节点之间。 |
If the disk was in use before (for example, for ZFS or as an OSD) you first need
to zap all traces of that usage. To remove the partition table, boot sector and
any other OSD leftover, you can use the following command:
如果磁盘之前被使用过(例如,用于 ZFS 或作为 OSD),您首先需要清除所有使用痕迹。要删除分区表、引导扇区以及任何其他 OSD 残留,可以使用以下命令:
ceph-volume lvm zap /dev/sd[X] --destroy
|
|
The above command will destroy all data on the disk! 上述命令将销毁磁盘上的所有数据! |
Starting with the Ceph Kraken release, a new Ceph OSD storage type was
introduced called Bluestore
[21].
This is the default when creating OSDs since Ceph Luminous.
从 Ceph Kraken 版本开始,引入了一种新的 Ceph OSD 存储类型,称为 Bluestore [21]。自 Ceph Luminous 以来,创建 OSD 时默认使用该类型。
pveceph osd create /dev/sd[X]
If you want to use a separate DB/WAL device for your OSDs, you can specify it
through the -db_dev and -wal_dev options. The WAL is placed with the DB, if
not specified separately.
如果您想为 OSD 使用单独的 DB/WAL 设备,可以通过-db_dev 和-wal_dev 选项进行指定。如果未单独指定,WAL 将与 DB 放置在一起。
pveceph osd create /dev/sd[X] -db_dev /dev/sd[Y] -wal_dev /dev/sd[Z]
You can directly choose the size of those with the -db_size and -wal_size
parameters respectively. If they are not given, the following values (in order)
will be used:
您可以分别通过-db_size 和-wal_size 参数直接选择它们的大小。如果未指定,将依次使用以下值:
-
bluestore_block_{db,wal}_size from Ceph configuration…
bluestore_block_{db,wal}_size 来自 Ceph 配置…-
… database, section osd
… 数据库,osd 部分 -
… database, section global
… 数据库,全局部分 -
… file, section osd
… 文件,osd 部分 -
… file, section global
… 文件,global 部分
-
-
10% (DB)/1% (WAL) of OSD size
OSD 大小的 10%(DB)/1%(WAL)
|
|
The DB stores BlueStore’s internal metadata, and the WAL is BlueStore’s
internal journal or write-ahead log. It is recommended to use a fast SSD or
NVRAM for better performance. DB 存储 BlueStore 的内部元数据,WAL 是 BlueStore 的内部日志或预写日志。建议使用高速 SSD 或 NVRAM 以获得更好的性能。 |
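As a sketch of an explicit DB size, assuming the -db_size value is interpreted in GiB (an assumption to verify against man pveceph), an OSD with a 100 GiB DB on a faster device could be created with:
pveceph osd create /dev/sd[X] -db_dev /dev/sd[Y] -db_size 100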
Before Ceph Luminous, Filestore was used as the default storage type for Ceph OSDs.
Starting with Ceph Nautilus, Proxmox VE does not support creating such OSDs with
pveceph anymore. If you still want to create filestore OSDs, use
ceph-volume directly.
在 Ceph Luminous 之前,Filestore 被用作 Ceph OSD 的默认存储类型。从 Ceph Nautilus 开始,Proxmox VE 不再支持使用 pveceph 创建此类 OSD。如果您仍想创建 filestore OSD,请直接使用 ceph-volume。
ceph-volume lvm create --filestore --data /dev/sd[X] --journal /dev/sd[Y]
8.7.2. Destroy OSDs 8.7.2. 销毁 OSD
If you experience problems with an OSD or its disk, try to
troubleshoot them first to decide if a
replacement is needed.
如果您遇到 OSD 或其磁盘的问题,请先尝试排查,以决定是否需要更换。
To destroy an OSD, navigate to the <Node> → Ceph → OSD panel or use the
mentioned CLI commands on the node where the OSD is located.
要销毁 OSD,请导航到 <节点> → Ceph → OSD 面板,或在 OSD 所在的节点上使用上述 CLI 命令。
-
Make sure the cluster has enough space to handle the removal of the OSD. In the Ceph → OSD panel, if the to-be-destroyed OSD is still up and in (non-zero value at AVAIL), make sure that all OSDs have their Used (%) value well below the nearfull_ratio of default 85%.
确保集群有足够的空间来处理 OSD 的移除。在 Ceph → OSD 面板中,如果待销毁的 OSD 仍然处于运行状态且处于“in”状态(AVAIL 值非零),请确保所有 OSD 的已用容量百分比(Used (%))远低于默认的 nearfull_ratio 85%。This way you can reduce the risk from the upcoming rebalancing, which may cause OSDs to run full and thereby blocking I/O on Ceph pools.
这样可以降低即将进行的重新平衡带来的风险,因为重新平衡可能导致 OSD 容量满载,从而阻塞 Ceph 池的 I/O 操作。Use the following command to get the same information on the CLI:
使用以下命令可以在命令行界面获取相同的信息:ceph osd df tree
-
If the to-be destroyed OSD is not out yet, select the OSD and click on Out. This will exclude it from data distribution and start a rebalance.
如果待销毁的 OSD 尚未被标记为“out”,请选择该 OSD 并点击“Out”。这将把它从数据分布中排除,并开始重新平衡。The following command does the same:
以下命令执行相同操作:ceph osd out <id>
-
If you can, wait until Ceph has finished the rebalance to always have enough replicas. The OSD will be empty; once it is, it will show 0 PGs.
如果可能,请等待 Ceph 完成重新平衡,以确保始终有足够的副本。OSD 将为空;一旦为空,它将显示 0 个 PG。 -
Click on Stop. If stopping is not safe yet, a warning will appear, and you should click on Cancel. Try it again in a few moments.
点击停止。如果尚未安全停止,将出现警告,您应点击取消。稍后再试。The following commands can be used to check if it is safe to stop and stop the OSD:
以下命令可用于检查是否可以安全停止并停止 OSD:ceph osd ok-to-stop <id> pveceph stop --service osd.<id>
-
Finally: 最后:
To remove the OSD from Ceph and delete all disk data, first click on More → Destroy. Enable the cleanup option to clean up the partition table and other structures. This makes it possible to immediately reuse the disk in Proxmox VE. Then, click on Remove.
要从 Ceph 中移除 OSD 并删除所有磁盘数据,首先点击 更多 → 销毁。启用清理选项以清理分区表和其他结构。这样可以立即在 Proxmox VE 中重新使用该磁盘。然后,点击 移除。The CLI command to destroy the OSD is:
销毁 OSD 的命令行命令是:pveceph osd destroy <id> [--cleanup]
8.8. Ceph Pools 8.8. Ceph 池
A pool is a logical group for storing objects. It holds a collection of objects,
known as Placement Groups (PG, pg_num).
池是用于存储对象的逻辑组。它包含一组称为放置组(Placement Groups,PG,pg_num)的对象。
8.8.1. Create and Edit Pools
8.8.1. 创建和编辑池
You can create and edit pools from the command line or the web interface of any
Proxmox VE host under Ceph → Pools.
您可以通过命令行或任何 Proxmox VE 主机的 Web 界面,在 Ceph → Pools 下创建和编辑池。
When no options are given, we set a default of 128 PGs, a size of 3
replicas and a min_size of 2 replicas, to ensure no data loss occurs if
any OSD fails.
当未指定选项时,我们默认设置 128 个 PG,副本大小为 3,最小副本数为 2,以确保任何 OSD 故障时不会发生数据丢失。
|
|
Do not set a min_size of 1. A replicated pool with min_size of 1
allows I/O on an object when it has only 1 replica, which could lead to data
loss, incomplete PGs or unfound objects. 不要将 min_size 设置为 1。min_size 为 1 的复制池允许在对象只有 1 个副本时进行 I/O,这可能导致数据丢失、不完整的 PG 或找不到对象。 |
It is advised that you either enable the PG-Autoscaler or calculate the PG
number based on your setup. You can find the formula and the PG calculator
[22] online. From Ceph Nautilus
onward, you can change the number of PGs
[23] after the setup.
建议您启用 PG 自动扩展器,或根据您的设置计算 PG 数量。您可以在网上找到公式和 PG 计算器[22]。从 Ceph Nautilus 版本开始,您可以在设置后更改 PG 数量[23]。
The PG autoscaler [24] can
automatically scale the PG count for a pool in the background. Setting the
Target Size or Target Ratio advanced parameters helps the PG-Autoscaler to
make better decisions.
PG 自动扩展器[24] 可以在后台自动调整池的 PG 数量。设置目标大小或目标比例高级参数有助于 PG 自动扩展器做出更好的决策。
Example for creating a pool over the CLI
通过 CLI 创建池的示例
pveceph pool create <pool-name> --add_storages
|
|
If you would also like to automatically define a storage for your
pool, keep the ‘Add as Storage’ checkbox checked in the web interface, or use the
command-line option --add_storages at pool creation. 如果您还想为您的资源池自动定义存储,请在网页界面中保持“添加为存储”复选框选中,或在创建资源池时使用命令行选项 --add_storages。 |
Pool Options 资源池选项
The following options are available on pool creation, and partially also when
editing a pool.
以下选项可在创建资源池时使用,部分选项也可在编辑资源池时使用。
- Name 名称
-
The name of the pool. This must be unique and can’t be changed afterwards.
存储池的名称。此名称必须唯一,且之后无法更改。 - Size 大小
-
The number of replicas per object. Ceph always tries to have this many copies of an object. Default: 3.
每个对象的副本数量。Ceph 始终尝试保持此数量的对象副本。默认值:3。 - PG Autoscale Mode PG 自动缩放模式
-
The automatic PG scaling mode [24] of the pool. If set to warn, it produces a warning message when a pool has a non-optimal PG count. Default: warn.
池的自动 PG 缩放模式[24]。如果设置为 warn,当池的 PG 数量不理想时会产生警告信息。默认值:warn。 - Add as Storage 添加为存储
-
Configure a VM or container storage using the new pool. Default: true (only visible on creation).
使用新池配置虚拟机或容器存储。默认值:true(仅在创建时可见)。
- Min. Size 最小大小
-
The minimum number of replicas per object. Ceph will reject I/O on the pool if a PG has less than this many replicas. Default: 2.
每个对象的最小副本数。如果某个 PG 的副本数少于此值,Ceph 将拒绝该池的 I/O 操作。默认值:2。 - Crush Rule Crush 规则
-
The rule to use for mapping object placement in the cluster. These rules define how data is placed within the cluster. See Ceph CRUSH & device classes for information on device-based rules.
用于映射集群中对象放置的规则。这些规则定义了数据在集群中的放置方式。有关基于设备的规则信息,请参见 Ceph CRUSH 和设备类别。 - # of PGs PG 数量
-
The number of placement groups [23] that the pool should have at the beginning. Default: 128.
池在开始时应拥有的放置组(placement groups)数量[23]。默认值:128。 - Target Ratio 目标比例
-
The ratio of data that is expected in the pool. The PG autoscaler uses the ratio relative to other ratio sets. It takes precedence over the target size if both are set.
池中预期的数据比例。PG 自动调整器使用相对于其他比例集的比例。如果同时设置了目标大小和目标比例,则目标比例优先。 - Target Size 目标大小
-
The estimated amount of data expected in the pool. The PG autoscaler uses this size to estimate the optimal PG count.
预期池中数据的估计量。PG 自动扩展器使用此大小来估算最佳的 PG 数量。 -
Min. # of PGs
最小 PG 数量 -
The minimum number of placement groups. This setting is used to fine-tune the lower bound of the PG count for that pool. The PG autoscaler will not merge PGs below this threshold.
最小的放置组数量。此设置用于微调该池 PG 数量的下限。PG 自动扩展器不会将 PG 合并到低于此阈值的数量。
Further information on Ceph pool handling can be found in the Ceph pool
operation [25]
manual.
有关 Ceph 池处理的更多信息,请参阅 Ceph 池操作[25]手册。
8.8.2. Erasure Coded Pools
8.8.2. 擦除编码池
Erasure coding (EC) is a form of ‘forward error correction’ code that allows
recovery from a certain amount of data loss. Erasure coded pools can offer
more usable space compared to replicated pools, but they do that for the price
of performance.
擦除编码(EC)是一种“前向纠错”代码,允许从一定量的数据丢失中恢复。与复制池相比,擦除编码池可以提供更多的可用空间,但这是以性能为代价的。
For comparison: in classic, replicated pools, multiple replicas of the data
are stored (size) while in erasure coded pool, data is split into k data
chunks with additional m coding (checking) chunks. Those coding chunks can be
used to recreate data should data chunks be missing.
作为比较:在经典的复制池中,存储了数据的多个副本(大小),而在擦除编码池中,数据被分成 k 个数据块和额外的 m 个编码(校验)块。如果数据块丢失,这些编码块可以用来重建数据。
The number of coding chunks, m, defines how many OSDs can be lost without
losing any data. The total amount of objects stored is k + m.
编码块的数量 m 定义了可以丢失多少个 OSD 而不丢失任何数据。存储的对象总数为 k + m。
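As an illustration of the arithmetic: with k = 4 and m = 2, every object is stored as 6 chunks for 4 chunks worth of data, that is, a raw-space overhead of 1.5x while tolerating the loss of two OSDs, compared to a 3x overhead for a replicated pool with size = 3.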
Creating EC Pools 创建 EC 池
Erasure coded (EC) pools can be created with the pveceph CLI tooling.
Planning an EC pool needs to account for the fact, that they work differently
than replicated pools.
可以使用 pveceph CLI 工具创建纠删码(EC)池。规划 EC 池时需要考虑它们的工作方式与复制池不同。
The default min_size of an EC pool depends on the m parameter. If m = 1,
the min_size of the EC pool will be k. The min_size will be k + 1 if
m > 1. The Ceph documentation recommends a conservative min_size of k + 2
[26].
EC 池的默认 min_size 取决于参数 m。如果 m = 1,EC 池的 min_size 将是 k。如果 m > 1,min_size 将是 k + 1。Ceph 文档建议保守的 min_size 为 k + 2 [26]。
If there are less than min_size OSDs available, any IO to the pool will be
blocked until there are enough OSDs available again.
如果可用的 OSD 少于 min_size,任何对池的 IO 操作将被阻塞,直到有足够的 OSD 再次可用。
|
|
When planning an erasure coded pool, keep an eye on the min_size as it
defines how many OSDs need to be available. Otherwise, IO will be blocked. 在规划纠删码池时,要注意 min_size,因为它定义了需要多少个 OSD 可用。否则,IO 将被阻塞。 |
For example, an EC pool with k = 2 and m = 1 will have size = 3,
min_size = 2 and will stay operational if one OSD fails. If the pool is
configured with k = 2, m = 2, it will have a size = 4 and min_size = 3
and stay operational if one OSD is lost.
例如,一个 k = 2,m = 1 的 EC 池将有 size = 3,min_size = 2,并且在一个 OSD 故障时仍能保持运行。如果池配置为 k = 2,m = 2,则 size = 4,min_size = 3,并且在丢失一个 OSD 时仍能保持运行。
To create a new EC pool, run the following command:
要创建一个新的 EC 池,请运行以下命令:
pveceph pool create <pool-name> --erasure-coding k=2,m=1
Optional parameters are failure-domain and device-class. If you
need to change any EC profile settings used by the pool, you will have to
create a new pool with a new profile.
可选参数为 failure-domain 和 device-class。如果需要更改池使用的任何 EC 配置文件设置,则必须创建一个带有新配置文件的新池。
This will create a new EC pool plus the needed replicated pool to store the RBD
omap and other metadata. In the end, there will be a <pool name>-data and
<pool name>-metadata pool. The default behavior is to create a matching storage
configuration as well. If that behavior is not wanted, you can disable it by
providing the --add_storages 0 parameter. When configuring the storage
configuration manually, keep in mind that the data-pool parameter needs to be
set. Only then will the EC pool be used to store the data objects. For example:
这将创建一个新的 EC 池以及存储 RBD omap 和其他元数据所需的复制池。最终,将有一个 <pool name>-data 和一个 <pool name>-metadata 池。默认行为是同时创建一个匹配的存储配置。如果不需要该行为,可以通过提供 --add_storages 0 参数来禁用它。在手动配置存储配置时,请记住需要设置 data-pool 参数。只有这样,EC 池才会被用来存储数据对象。例如:
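A possible manual entry, assuming the automatically created pool names <pool-name>-metadata and <pool-name>-data as placeholders (the same pvesm call is shown again in the next subsection):
pvesm add rbd <storage-name> --pool <pool-name>-metadata --data-pool <pool-name>-data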
|
|
The optional parameters --size, --min_size and --crush_rule will be
used for the replicated metadata pool, but not for the erasure coded data pool.
If you need to change the min_size on the data pool, you can do it later.
The size and crush_rule parameters cannot be changed on erasure coded
pools. 可选参数 --size、--min_size 和 --crush_rule 将用于复制的元数据池,但不会用于纠删码数据池。如果需要更改数据池上的 min_size,可以稍后进行。size 和 crush_rule 参数不能在纠删码池上更改。 |
If there is a need to further customize the EC profile, you can do so by
creating it with the Ceph tools directly [27], and
specify the profile to use with the profile parameter.
如果需要进一步自定义 EC 配置文件,可以直接使用 Ceph 工具创建它 [27],并通过 profile 参数指定要使用的配置文件。
For example: 例如:
pveceph pool create <pool-name> --erasure-coding profile=<profile-name>
Adding EC Pools as Storage
添加 EC 池作为存储
You can add an already existing EC pool as storage to Proxmox VE. It works the same
way as adding an RBD pool but requires the extra data-pool option.
您可以将已存在的 EC 池作为存储添加到 Proxmox VE 中。其操作方式与添加 RBD 池相同,但需要额外的 data-pool 选项。
pvesm add rbd <storage-name> --pool <replicated-pool> --data-pool <ec-pool>
|
|
Do not forget to add the keyring and monhost option for any external
Ceph clusters, not managed by the local Proxmox VE cluster. 请勿忘记为任何非本地 Proxmox VE 集群管理的外部 Ceph 集群添加 keyring 和 monhost 选项。 |
8.8.3. Destroy Pools 8.8.3. 销毁存储池
To destroy a pool via the GUI, select a node in the tree view and go to the
Ceph → Pools panel. Select the pool to destroy and click the Destroy
button. To confirm the destruction of the pool, you need to enter the pool name.
要通过图形界面销毁存储池,请在树视图中选择一个节点,进入 Ceph → Pools 面板。选择要销毁的存储池,然后点击销毁按钮。确认销毁存储池时,需要输入存储池名称。
Run the following command to destroy a pool. Specify the --remove_storages option
to also remove the associated storage.
运行以下命令销毁存储池。指定 --remove_storages 参数可同时删除相关存储。
pveceph pool destroy <name>
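For example, to also drop the matching Proxmox VE storage definition (option as referenced above):
pveceph pool destroy <name> --remove_storages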
|
|
Pool deletion runs in the background and can take some time.
You will notice the data usage in the cluster decreasing throughout this
process. 存储池删除在后台运行,可能需要一些时间。在此过程中,您会注意到集群中的数据使用量逐渐减少。 |
8.8.4. PG Autoscaler 8.8.4. PG 自动扩展器
The PG autoscaler allows the cluster to consider the amount of (expected) data
stored in each pool and to choose the appropriate pg_num values automatically.
It is available since Ceph Nautilus.
PG 自动扩展器允许集群根据每个池中存储的(预期)数据量,自动选择合适的 pg_num 值。该功能自 Ceph Nautilus 版本起可用。
You may need to activate the PG autoscaler module before adjustments can take
effect.
您可能需要先激活 PG 自动扩展器模块,调整才能生效。
ceph mgr module enable pg_autoscaler
The autoscaler is configured on a per pool basis and has the following modes:
自动扩展器按每个池进行配置,具有以下模式:
| Mode 模式 | Behavior 行为 |
|---|---|
| warn 警告 | A health warning is issued if the suggested pg_num value differs too much from the current value. |
| on 开启 | The pg_num is adjusted automatically with no need for any manual interaction. |
| off 关闭 | No automatic pg_num adjustments are made, and no warning will be issued if the PG count is not optimal. |
The scaling factor can be adjusted to facilitate future data storage with the
target_size, target_size_ratio and the pg_num_min options.
可以调整缩放因子,以便通过 target_size、target_size_ratio 和 pg_num_min 选项来促进未来的数据存储。
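For example, the mode and a target ratio can be set per pool with standard Ceph commands; the pool name is a placeholder and the ratio is only illustrative:
ceph osd pool set <pool-name> pg_autoscale_mode on
ceph osd pool set <pool-name> target_size_ratio 0.5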
|
|
By default, the autoscaler considers tuning the PG count of a pool if
it is off by a factor of 3. This will lead to a considerable shift in data
placement and might introduce a high load on the cluster. 默认情况下,如果池的 PG 数量偏差达到 3 倍,自动调整器会考虑调整 PG 数量。这将导致数据放置发生较大变化,并可能给集群带来较高负载。 |
You can find a more in-depth introduction to the PG autoscaler on Ceph’s Blog -
New in
Nautilus: PG merging and autotuning.
您可以在 Ceph 的博客中找到关于 PG 自动扩展器的更深入介绍——Nautilus 新特性:PG 合并和自动调优。
8.9. Ceph CRUSH & Device Classes
8.9. Ceph CRUSH 与设备类别
The CRUSH (Controlled
Replication Under Scalable Hashing) algorithm
[28] is at the
foundation of Ceph.
CRUSH(可控复制下的可扩展哈希)算法 [28] 是 Ceph 的基础。
CRUSH calculates where to store and retrieve data from. This has the
advantage that no central indexing service is needed. CRUSH works using a map of
OSDs, buckets (device locations) and rulesets (data replication) for pools.
CRUSH 计算数据的存储和检索位置。这样做的优点是无需中央索引服务。CRUSH 通过 OSD、桶(设备位置)和规则集(数据复制)映射来工作,用于池的管理。
|
|
Further information can be found in the Ceph documentation, under the
section CRUSH map [29]. 更多信息可以在 Ceph 文档中找到,位于 CRUSH 地图部分 [29]。 |
This map can be altered to reflect different replication hierarchies. The object
replicas can be separated (e.g., failure domains), while maintaining the desired
distribution.
该地图可以被修改以反映不同的复制层级。对象副本可以被分开(例如,故障域),同时保持所需的分布。
A common configuration is to use different classes of disks for different Ceph
pools. For this reason, Ceph introduced device classes with luminous, to
accommodate the need for easy ruleset generation.
一种常见的配置是为不同的 Ceph 池使用不同类别的磁盘。基于此,Ceph 在 luminous 版本中引入了设备类别,以满足轻松生成规则集的需求。
The device classes can be seen in the ceph osd tree output. These classes
represent their own root bucket, which can be seen with the below command.
设备类别可以在 ceph osd tree 输出中看到。这些类别代表它们自己的根桶,可以通过以下命令查看。
ceph osd crush tree --show-shadow
Example output from the above command:
上述命令的示例输出:
ID  CLASS  WEIGHT   TYPE NAME
-16 nvme   2.18307  root default~nvme
-13 nvme   0.72769      host sumi1~nvme
 12 nvme   0.72769          osd.12
-14 nvme   0.72769      host sumi2~nvme
 13 nvme   0.72769          osd.13
-15 nvme   0.72769      host sumi3~nvme
 14 nvme   0.72769          osd.14
 -1        7.70544  root default
 -3        2.56848      host sumi1
 12 nvme   0.72769          osd.12
 -5        2.56848      host sumi2
 13 nvme   0.72769          osd.13
 -7        2.56848      host sumi3
 14 nvme   0.72769          osd.14
To instruct a pool to only distribute objects on a specific device class, you
first need to create a ruleset for the device class:
要指示一个池仅在特定设备类上分发对象,首先需要为该设备类创建一个规则集:
ceph osd crush rule create-replicated <rule-name> <root> <failure-domain> <class>
| Parameter 参数 | Description 描述 |
|---|---|
| <rule-name> | name of the rule, to connect with a pool (seen in GUI & CLI) |
| <root> | which crush root it should belong to (default Ceph root "default") |
| <failure-domain> | at which failure-domain the objects should be distributed (usually host) |
| <class> | what type of OSD backing store to use (e.g., nvme, ssd, hdd) |
Once the rule is in the CRUSH map, you can tell a pool to use the ruleset.
一旦规则被添加到 CRUSH 映射中,你可以告诉一个池使用该规则集。
ceph osd pool set <pool-name> crush_rule <rule-name>
|
|
If the pool already contains objects, these must be moved accordingly.
Depending on your setup, this may introduce a big performance impact on your
cluster. As an alternative, you can create a new pool and move disks separately. 如果池中已经包含对象,则必须相应地移动这些对象。根据你的设置,这可能会对集群性能产生较大影响。作为替代方案,你可以创建一个新池并单独移动磁盘。 |
8.10. Ceph Client 8.10. Ceph 客户端
Following the setup from the previous sections, you can configure Proxmox VE to use
such pools to store VM and Container images. Simply use the GUI to add a new
RBD storage (see section
Ceph RADOS Block Devices (RBD)).
按照前面章节的设置,您可以配置 Proxmox VE 使用此类池来存储虚拟机和容器镜像。只需使用图形界面添加一个新的 RBD 存储(参见章节 Ceph RADOS 块设备(RBD))。
You also need to copy the keyring to a predefined location for an external Ceph
cluster. If Ceph is installed on the Proxmox nodes itself, then this will be
done automatically.
您还需要将密钥环复制到外部 Ceph 集群的预定义位置。如果 Ceph 安装在 Proxmox 节点本身,则此操作会自动完成。
|
|
The filename needs to be <storage_id> + `.keyring`, where <storage_id> is
the expression after rbd: in /etc/pve/storage.cfg. In the following example,
my-ceph-storage is the <storage_id>: 文件名需要是 <storage_id> + `.keyring`,其中 <storage_id> 是 /etc/pve/storage.cfg 中 rbd: 后面的表达式。在以下示例中,my-ceph-storage 是 <storage_id>: |
mkdir /etc/pve/priv/ceph
cp /etc/ceph/ceph.client.admin.keyring /etc/pve/priv/ceph/my-ceph-storage.keyring
8.11. CephFS
Ceph also provides a filesystem, which runs on top of the same object storage as
RADOS block devices do. A Metadata Server (MDS) is used to map the
RADOS backed objects to files and directories, allowing Ceph to provide a
POSIX-compliant, replicated filesystem. This allows you to easily configure a
clustered, highly available, shared filesystem. Ceph’s Metadata Servers
guarantee that files are evenly distributed over the entire Ceph cluster. As a
result, even cases of high load will not overwhelm a single host, which can be
an issue with traditional shared filesystem approaches, for example NFS.
Ceph 还提供了一个文件系统,该文件系统运行在与 RADOS 块设备相同的对象存储之上。元数据服务器(MDS)用于将基于 RADOS 的对象映射到文件和目录,使 Ceph 能够提供一个符合 POSIX 标准的、可复制的文件系统。这使您能够轻松配置一个集群化、高可用的共享文件系统。Ceph 的元数据服务器保证文件均匀分布在整个 Ceph 集群中。因此,即使在高负载情况下,也不会使单个主机不堪重负,这在传统的共享文件系统方法(例如 NFS)中可能是一个问题。
Proxmox VE supports both creating a hyper-converged CephFS and using an existing
CephFS as storage to save backups, ISO files, and container
templates.
Proxmox VE 支持创建超融合的 CephFS,也支持使用现有的 CephFS 作为存储来保存备份、ISO 文件和容器模板。
8.11.1. Metadata Server (MDS)
8.11.1. 元数据服务器(MDS)
CephFS needs at least one Metadata Server to be configured and running, in order
to function. You can create an MDS through the Proxmox VE web GUI’s Node
-> CephFS panel or from the command line with:
CephFS 需要至少配置并运行一个元数据服务器(Metadata Server)才能正常工作。您可以通过 Proxmox VE 网页 GUI 的 节点 -> CephFS 面板创建 MDS,或者通过命令行创建:
pveceph mds create
Multiple metadata servers can be created in a cluster, but with the default
settings, only one can be active at a time. If an MDS or its node becomes
unresponsive (or crashes), another standby MDS will get promoted to active.
You can speed up the handover between the active and standby MDS by using
the hotstandby parameter option on creation, or if you have already created it
you may set/add:
集群中可以创建多个元数据服务器,但在默认设置下,任意时刻只有一个可以处于活动状态。如果某个 MDS 或其节点变得无响应(或崩溃),另一个备用 MDS 将被提升为活动状态。您可以通过在创建时使用 hotstandby 参数选项来加快活动和备用 MDS 之间的切换,或者如果已经创建,可以在 /etc/pve/ceph.conf 的相应 MDS 部分设置/添加:
mds standby replay = true
in the respective MDS section of /etc/pve/ceph.conf. With this enabled, the
specified MDS will remain in a warm state, polling the active one, so that it
can take over faster in case of any issues.
启用此功能后,指定的 MDS 将保持在预热状态,轮询活动 MDS,以便在出现问题时能够更快地接管。
|
|
This active polling will have an additional performance impact on your
system and the active MDS. 这种主动轮询会对您的系统和活动 MDS 产生额外的性能影响。 |
Since Luminous (12.2.x) you can have multiple active metadata servers
running at once, but this is normally only useful if you have a high amount of
clients running in parallel. Otherwise the MDS is rarely the bottleneck in a
system. If you want to set this up, please refer to the Ceph documentation.
[30]
自 Luminous(12.2.x)版本起,您可以同时运行多个活动的元数据服务器,但这通常仅在有大量客户端并行运行时才有用。否则,MDS 很少成为系统的瓶颈。如果您想设置此功能,请参阅 Ceph 文档。[30]
8.11.2. Create CephFS 8.11.2. 创建 CephFS
With Proxmox VE’s integration of CephFS, you can easily create a CephFS using the
web interface, CLI or an external API interface. Some prerequisites are required
for this to work:
通过 Proxmox VE 对 CephFS 的集成,您可以轻松地使用网页界面、命令行界面或外部 API 接口创建 CephFS。要实现此功能,需要满足一些前提条件:
Prerequisites for a successful CephFS setup:
成功设置 CephFS 的先决条件:
-
Install Ceph packages - if this was already done some time ago, you may want to rerun it on an up-to-date system to ensure that all CephFS related packages get installed.
安装 Ceph 软件包——如果之前已经安装过,建议在系统更新后重新运行,以确保所有与 CephFS 相关的软件包都已安装。
After this is complete, you can simply create a CephFS through
either the Web GUI’s Node -> CephFS panel or the command-line tool pveceph,
for example:
完成此步骤后,您可以通过 Web GUI 的节点 -> CephFS 面板或命令行工具 pveceph 来创建 CephFS,例如:
pveceph fs create --pg_num 128 --add-storage
This creates a CephFS named cephfs, using a pool for its data named
cephfs_data with 128 placement groups and a pool for its metadata named
cephfs_metadata with one quarter of the data pool’s placement groups (32).
Check the Proxmox VE managed Ceph pool chapter or visit the
Ceph documentation for more information regarding an appropriate placement group
number (pg_num) for your setup [23].
Additionally, the --add-storage parameter will add the CephFS to the Proxmox VE
storage configuration after it has been created successfully.
这将创建一个名为 cephfs 的 CephFS,使用名为 cephfs_data 的数据池,包含 128 个放置组,以及一个名为 cephfs_metadata 的元数据池,其放置组数量为数据池的四分之一(32 个)。有关适合您设置的放置组数量(pg_num)的更多信息,请查看 Proxmox VE 管理的 Ceph 池章节或访问 Ceph 文档[23]。此外,--add-storage 参数将在 CephFS 成功创建后将其添加到 Proxmox VE 存储配置中。
8.11.3. Destroy CephFS 8.11.3. 销毁 CephFS
|
|
Destroying a CephFS will render all of its data unusable. This cannot be
undone! 销毁 CephFS 将导致其所有数据无法使用。此操作不可撤销! |
To completely and gracefully remove a CephFS, the following steps are
necessary:
要完全且优雅地移除 CephFS,需要执行以下步骤:
-
Disconnect every non-Proxmox VE client (e.g. unmount the CephFS in guests).
断开所有非 Proxmox VE 客户端的连接(例如,在虚拟机中卸载 CephFS)。 -
Disable all related CephFS Proxmox VE storage entries (to prevent it from being automatically mounted).
禁用所有相关的 CephFS Proxmox VE 存储条目(以防止其被自动挂载)。 -
Remove all used resources from guests (e.g. ISOs) that are on the CephFS you want to destroy.
从客机中移除所有使用中的资源(例如位于您想要销毁的 CephFS 上的 ISO 文件)。 -
Unmount the CephFS storages on all cluster nodes manually with
在所有集群节点上手动卸载 CephFS 存储,命令如下:umount /mnt/pve/<STORAGE-NAME>
Where <STORAGE-NAME> is the name of the CephFS storage in your Proxmox VE.
其中 <STORAGE-NAME> 是您 Proxmox VE 中 CephFS 存储的名称。 -
Now make sure that no metadata server (MDS) is running for that CephFS, either by stopping or destroying them. This can be done through the web interface or via the command-line interface, for the latter you would issue the following command:
现在确保没有元数据服务器(MDS)正在运行该 CephFS,可以通过停止或销毁它们来实现。此操作可以通过网页界面完成,也可以通过命令行界面完成,后者可执行以下命令:pveceph stop --service mds.NAMEto stop them, or 停止它们,或者
pveceph mds destroy NAME
to destroy them. 销毁它们。
Note that standby servers will automatically be promoted to active when an active MDS is stopped or removed, so it is best to first stop all standby servers.
请注意,当一个活动的 MDS 被停止或移除时,备用服务器会自动提升为活动状态,因此最好先停止所有备用服务器。 -
Now you can destroy the CephFS with
现在可以使用以下命令销毁 CephFS:
pveceph fs destroy NAME --remove-storages --remove-pools
This will automatically destroy the underlying Ceph pools as well as remove the storages from pve config.
这将自动销毁底层的 Ceph 池,并从 pve 配置中移除存储。
After these steps, the CephFS should be completely removed and if you have
other CephFS instances, the stopped metadata servers can be started again
to act as standbys.
完成这些步骤后,CephFS 应该被完全移除,如果你有其他 CephFS 实例,停止的元数据服务器可以重新启动,作为备用服务器。
8.12. Ceph Maintenance 8.12. Ceph 维护
8.12.1. Replace OSDs 8.12.1. 更换 OSDs
With the following steps you can replace the disk of an OSD, which is
one of the most common maintenance tasks in Ceph. If there is a
problem with an OSD while its disk still seems to be healthy, read the
troubleshooting section first.
通过以下步骤,您可以更换 OSD 的磁盘,这是 Ceph 中最常见的维护任务之一。如果 OSD 出现问题但其磁盘看起来仍然健康,请先阅读故障排除部分。
-
If the disk failed, get a recommended replacement disk of the same type and size.
如果磁盘损坏,请获取同类型和同容量的推荐替换磁盘。 -
Destroy the OSD in question.
销毁有问题的 OSD。 -
Detach the old disk from the server and attach the new one.
从服务器上卸下旧磁盘并安装新磁盘。 -
Create the OSD again.
重新创建 OSD。 -
After automatic rebalancing, the cluster status should switch back to HEALTH_OK. Any still listed crashes can be acknowledged by running the following command:
自动重新平衡后,集群状态应切换回 HEALTH_OK。任何仍列出的崩溃可以通过运行以下命令来确认:
ceph crash archive-all
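As a rough sketch of the destroy and re-create steps above on the command line (the OSD ID and device path are only placeholders, adapt them to your system; both actions are also available in the Ceph → OSD panel of the web interface):
pveceph osd destroy <osd-id>
pveceph osd create /dev/<new-disk>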
8.12.2. Trim/Discard
It is good practice to run fstrim (discard) regularly on VMs and containers.
This releases data blocks that the filesystem isn’t using anymore. It reduces
data usage and resource load. Most modern operating systems issue such discard
commands to their disks regularly. You only need to ensure that the Virtual
Machines enable the disk discard option.
定期在虚拟机和容器上运行 fstrim(discard)是良好的实践。这会释放文件系统不再使用的数据块,减少数据使用和资源负载。大多数现代操作系统会定期向其磁盘发出此类 discard 命令。您只需确保虚拟机启用了磁盘 discard 选项。
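For example, a trim can also be triggered manually: inside a VM guest you can run fstrim directly, and for containers Proxmox VE provides a pct subcommand on the host (the container ID is a placeholder):
fstrim -av
pct fstrim <vmid>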
8.12.3. Scrub & Deep Scrub
8.12.3. 清理与深度清理
Ceph ensures data integrity by scrubbing placement groups. Ceph checks every
object in a PG for its health. There are two forms of Scrubbing, daily
cheap metadata checks and weekly deep data checks. The weekly deep scrub reads
the objects and uses checksums to ensure data integrity. If a running scrub
interferes with business (performance) needs, you can adjust the time when
scrubs [31]
are executed.
Ceph 通过清理放置组来确保数据完整性。Ceph 会检查放置组中每个对象的健康状况。清理有两种形式:每日的廉价元数据检查和每周的深度数据检查。每周的深度清理会读取对象并使用校验和来确保数据完整性。如果正在运行的清理影响了业务(性能)需求,可以调整清理执行的时间[31]。
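One possible way to move scrubbing out of business hours is via Ceph configuration options, for example restricting it to a nightly window (the hours below are only illustrative values):
ceph config set osd osd_scrub_begin_hour 22
ceph config set osd osd_scrub_end_hour 6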
8.12.4. Shutdown Proxmox VE + Ceph HCI Cluster
8.12.4. 关闭 Proxmox VE + Ceph HCI 集群
To shut down the whole Proxmox VE + Ceph cluster, first stop all Ceph clients. These
will mainly be VMs and containers. If you have additional clients that might
access a Ceph FS or an installed RADOS GW, stop these as well.
Highly available guests will switch their state to stopped when powered down
via the Proxmox VE tooling.
要关闭整个 Proxmox VE + Ceph 集群,首先停止所有 Ceph 客户端。这些主要是虚拟机和容器。如果还有其他可能访问 Ceph FS 或已安装 RADOS GW 的客户端,也要停止它们。高可用性客户机在通过 Proxmox VE 工具关闭电源时会将其状态切换为已停止。
Once all clients, VMs and containers are off or not accessing the Ceph cluster
anymore, verify that the Ceph cluster is in a healthy state. Either via the Web UI
or the CLI:
一旦所有客户端、虚拟机和容器都关闭或不再访问 Ceph 集群,验证 Ceph 集群处于健康状态。可以通过 Web 界面或命令行界面进行验证:
ceph -s
To disable all self-healing actions, and to pause any client IO in the Ceph
cluster, enable the following OSD flags in the Ceph → OSD panel or via the
CLI:
要禁用所有自我修复操作,并暂停 Ceph 集群中的任何客户端 IO,请在 Ceph → OSD 面板中或通过命令行启用以下 OSD 标志:
ceph osd set noout
ceph osd set norecover
ceph osd set norebalance
ceph osd set nobackfill
ceph osd set nodown
ceph osd set pause
Start powering down your nodes without a monitor (MON). After these nodes are
down, continue by shutting down nodes with monitors on them.
开始关闭没有监视器(MON)的节点。待这些节点关闭后,继续关闭带有监视器的节点。
When powering on the cluster, start the nodes with monitors (MONs) first. Once
all nodes are up and running, confirm that all Ceph services are up and running
before you unset the OSD flags again:
启动集群时,先启动带有监视器(MON)的节点。所有节点启动并运行后,确认所有 Ceph 服务均已启动运行,然后再取消设置 OSD 标志:
ceph osd unset pause
ceph osd unset nodown
ceph osd unset nobackfill
ceph osd unset norebalance
ceph osd unset norecover
ceph osd unset noout
You can now start up the guests. Highly available guests will change their state
to started when they power on.
您现在可以启动虚拟机。高可用虚拟机在开机时会将其状态更改为已启动。
8.13. Ceph Monitoring and Troubleshooting
8.13. Ceph 监控与故障排除
It is important to continuously monitor the health of a Ceph deployment from the
beginning, either by using the Ceph tools or by accessing
the status through the Proxmox VE API.
从一开始就持续监控 Ceph 部署的健康状况非常重要,可以使用 Ceph 工具或通过 Proxmox VE API 访问状态来实现。
The following Ceph commands can be used to see if the cluster is healthy
(HEALTH_OK), if there are warnings (HEALTH_WARN), or even errors
(HEALTH_ERR). If the cluster is in an unhealthy state, the status commands
below will also give you an overview of the current events and actions to take.
To stop their execution, press CTRL-C.
以下 Ceph 命令可用于查看集群是否健康(HEALTH_OK)、是否有警告(HEALTH_WARN)或甚至错误(HEALTH_ERR)。如果集群处于不健康状态,下面的状态命令还会为您提供当前事件的概览及应采取的措施。要停止执行,请按 CTRL-C。
Continuously watch the cluster status:
持续监视集群状态:
watch ceph --status
Print the cluster status once (not being updated) and continuously append lines of status events:
打印一次集群状态(不更新),并持续追加状态事件行:
ceph --watch
8.13.1. Troubleshooting 8.13.1. 故障排除
This section includes frequently used troubleshooting information.
More information can be found on the official Ceph website under
Troubleshooting
[32].
本节包含常用的故障排除信息。更多信息可在官方 Ceph 网站的故障排除部分找到 [32]。
Relevant logs on the affected node(s) 受影响节点上的相关日志
-
System → System Log or via the CLI, for example of the last 2 days:
系统 → 系统日志,或通过命令行界面查看,例如最近两天的日志:journalctl --since "2 days ago" -
IPMI and RAID controller logs
IPMI 和 RAID 控制器日志
Ceph service crashes can be listed and viewed in detail by running the following
commands:
可以通过运行以下命令列出并详细查看 Ceph 服务崩溃情况:
ceph crash ls
ceph crash info <crash_id>
Crashes marked as new can be acknowledged by running:
可以通过运行以下命令确认标记为新的崩溃:
ceph crash archive-all
To get a more detailed view, every Ceph service has a log file under
/var/log/ceph/. If more detail is required, the log level can be
adjusted [33].
为了获得更详细的视图,每个 Ceph 服务在/var/log/ceph/下都有一个日志文件。如果需要更多细节,可以调整日志级别[33]。
Common causes of Ceph issues Ceph 问题的常见原因
-
Network problems like congestion, a faulty switch, a shut down interface or a blocking firewall. Check whether all Proxmox VE nodes are reliably reachable on the corosync cluster network and on the Ceph public and cluster network.
网络问题,如拥塞、故障交换机、关闭的接口或阻塞的防火墙。检查所有 Proxmox VE 节点是否在 corosync 集群网络以及 Ceph 公共和集群网络上可靠可达。 -
Disk or connection parts which are:
磁盘或连接部件:-
defective 损坏
-
not firmly mounted 未牢固安装
-
lacking I/O performance under higher load (e.g. when using HDDs, consumer hardware or inadvisable RAID controllers)
在较高负载下缺乏 I/O 性能(例如使用 HDD、消费级硬件或不建议使用的 RAID 控制器时)
-
-
Not fulfilling the recommendations for a healthy Ceph cluster.
未满足健康 Ceph 集群的建议要求。
-
- OSDs down/crashed OSD 宕机/崩溃
-
A faulty OSD will be reported as down and mostly (auto) out 10 minutes later. Depending on the cause, it can also automatically become up and in again. To try a manual activation via web interface, go to Any node → Ceph → OSD, select the OSD and click on Start, In and Reload. When using the shell, run following command on the affected node:
故障的 OSD 将在 10 分钟后被报告为 down 状态,并且大多数情况下会自动变为 out 状态。根据故障原因,它也可能自动恢复为 up 和 in 状态。若要通过网页界面尝试手动激活,请进入任意节点 → Ceph → OSD,选择该 OSD,然后点击启动、加入和重新加载。使用 Shell 时,在受影响的节点上运行以下命令:ceph-volume lvm activate --all
To activate a failed OSD, it may be necessary to safely reboot the respective node or, as a last resort, to recreate or replace the OSD.
要激活故障的 OSD,可能需要安全重启相应的节点,或者作为最后手段,重新创建或更换该 OSD。
9. Storage Replication
9. 存储复制
The pvesr command-line tool manages the Proxmox VE storage replication
framework. Storage replication brings redundancy for guests using
local storage and reduces migration time.
pvesr 命令行工具管理 Proxmox VE 存储复制框架。存储复制为使用本地存储的虚拟机提供冗余,并减少迁移时间。
It replicates guest volumes to another node so that all data is available
without using shared storage. Replication uses snapshots to minimize traffic
sent over the network. Therefore, new data is sent only incrementally after
the initial full sync. In the case of a node failure, your guest data is
still available on the replicated node.
它将来宾卷复制到另一个节点,使所有数据在不使用共享存储的情况下可用。复制使用快照来最小化通过网络发送的流量。因此,初始完全同步后,仅增量发送新数据。在节点故障的情况下,您的来宾数据仍然可以在复制的节点上使用。
The replication is done automatically in configurable intervals.
The minimum replication interval is one minute, and the maximum interval is
once a week. The format used to specify those intervals is a subset of
systemd calendar events, see
Schedule Format section:
复制会在可配置的时间间隔内自动完成。最短复制间隔为一分钟,最长间隔为一周。用于指定这些间隔的格式是 systemd 日历事件的一个子集,详见计划格式部分:
It is possible to replicate a guest to multiple target nodes,
but not twice to the same target node.
可以将来宾复制到多个目标节点,但不能对同一目标节点进行两次复制。
Each replication's bandwidth can be limited, to avoid overloading a storage
or server.
每个复制的带宽可以被限制,以避免存储或服务器过载。
Only changes since the last replication (so-called deltas) need to be
transferred if the guest is migrated to a node to which it already is
replicated. This reduces the time needed significantly. The replication
direction automatically switches if you migrate a guest to the replication
target node.
如果将虚拟机迁移到已经复制到的节点,则只需传输自上次复制以来的更改(所谓的增量)。这大大减少了所需时间。如果将虚拟机迁移到复制目标节点,复制方向会自动切换。
For example: VM100 is currently on nodeA and gets replicated to nodeB.
You migrate it to nodeB, so now it gets automatically replicated back from
nodeB to nodeA.
例如:VM100 当前位于 nodeA,并复制到 nodeB。将其迁移到 nodeB 后,它会自动从 nodeB 复制回 nodeA。
If you migrate to a node where the guest is not replicated, the whole disk
data must be sent over. After the migration, the replication job continues to
replicate this guest to the configured nodes.
如果迁移到虚拟机未复制的节点,则必须传输整个磁盘数据。迁移完成后,复制任务会继续将该虚拟机复制到配置的节点。
|
|
High-Availability is allowed in combination with storage replication, but there
may be some data loss between the last synced time and the time a node failed. |
9.1. Supported Storage Types
9.1. 支持的存储类型
| Description 描述 | Plugin type 插件类型 | Snapshots 快照 | Stable 稳定 |
|---|---|---|---|
| ZFS (local) ZFS(本地) | zfspool | yes 是 | yes 是 |
9.2. Schedule Format 9.2. 时间表格式
Replication uses calendar events for
configuring the schedule.
复制使用日历事件来配置时间表。
9.3. Error Handling 9.3. 错误处理
If a replication job encounters problems, it is placed in an error state.
In this state, the configured replication intervals get suspended
temporarily. The failed replication is repeatedly tried again in a
30 minute interval.
Once this succeeds, the original schedule gets activated again.
如果复制任务遇到问题,它将被置于错误状态。在此状态下,配置的复制间隔会暂时暂停。失败的复制任务会每隔 30 分钟重复尝试一次。一旦成功,原有的计划将重新激活。
9.3.1. Possible issues 9.3.1. 可能的问题
Some of the most common issues are in the following list. Depending on your
setup there may be another cause.
以下列表列出了一些最常见的问题。根据您的设置,可能还有其他原因。
-
Network is not working.
网络无法正常工作。 -
No free space left on the replication target storage.
复制目标存储空间已满。 -
Storage with the same storage ID is not available on the target node.
目标节点上没有具有相同存储 ID 的存储。
|
|
You can always use the replication log to find out what is causing the problem. 您可以随时使用复制日志来查明问题原因。 |
9.3.2. Migrating a guest in case of Error
9.3.2. 出现错误时迁移虚拟机
In the case of a grave error, a virtual guest may get stuck on a failed
node. You then need to move it manually to a working node again.
在发生严重错误的情况下,虚拟机可能会卡在故障节点上。此时需要手动将其迁移到一个正常工作的节点上。
9.3.3. Example 9.3.3. 示例
Let’s assume that you have two guests (VM 100 and CT 200) running on node A
and replicate to node B.
Node A failed and can not get back online. Now you have to migrate the guest
to Node B manually.
假设你有两个虚拟机(VM 100 和 CT 200)运行在节点 A 上,并且复制到节点 B。节点 A 故障且无法重新上线。现在你必须手动将虚拟机迁移到节点 B。
-
connect to node B over ssh or open its shell via the web UI
通过 SSH 连接到节点 B,或通过网页界面打开其 Shell -
check that the cluster is quorate
检查集群是否达到法定人数# pvecm status
-
If you have no quorum, we strongly advise to fix this first and make the node operable again. Only if this is not possible at the moment, you may use the following command to enforce quorum on the current node:
如果没有法定人数,我们强烈建议先解决此问题,使节点恢复可操作状态。只有在当前无法解决的情况下,您才可以使用以下命令在当前节点上强制执行法定人数:# pvecm expected 1
|
|
Avoid changes which affect the cluster if expected votes are set
(for example adding/removing nodes, storages, virtual guests) at all costs.
Only use it to get vital guests up and running again or to resolve the quorum
issue itself. 避免进行影响集群的更改(例如添加/移除节点、存储、虚拟机)尤其是在预期投票数已设置的情况下。仅在需要让关键虚拟机重新运行或解决法定人数问题时使用该命令。 |
-
move both guest configuration files from the origin node A to node B:
将两个客户机配置文件从原始节点 A 移动到节点 B:# mv /etc/pve/nodes/A/qemu-server/100.conf /etc/pve/nodes/B/qemu-server/100.conf # mv /etc/pve/nodes/A/lxc/200.conf /etc/pve/nodes/B/lxc/200.conf
-
Now you can start the guests again:
现在你可以重新启动客户机:# qm start 100 # pct start 200
Remember to replace the VMIDs and node names with your respective values.
记得将 VMID 和节点名称替换为你各自的值。
9.4. Managing Jobs 9.4. 管理任务
You can use the web GUI to create, modify, and remove replication jobs
easily. Additionally, the command-line interface (CLI) tool pvesr can be
used to do this.
您可以使用网页图形界面轻松创建、修改和删除复制任务。此外,也可以使用命令行界面(CLI)工具 pvesr 来完成这些操作。
You can find the replication panel on all levels (datacenter, node, virtual
guest) in the web GUI. They differ in which jobs get shown:
all, node- or guest-specific jobs.
您可以在网页图形界面的所有层级(数据中心、节点、虚拟客户机)找到复制面板。它们显示的任务有所不同:显示所有任务、节点特定任务或客户机特定任务。
When adding a new job, you need to specify the guest if not already selected
as well as the target node. The replication
schedule can be set if the default of every
15 minutes is not desired. You may impose a rate-limit on a replication
job. The rate limit can help to keep the load on the storage acceptable.
添加新任务时,如果尚未选择客户机,则需要指定客户机以及目标节点。如果不想使用默认的每 15 分钟一次的复制计划,可以设置复制时间表。您还可以对复制任务施加速率限制。速率限制有助于保持存储负载在可接受范围内。
A replication job is identified by a cluster-wide unique ID. This ID is
composed of the VMID in addition to a job number.
This ID must only be specified manually if the CLI tool is used.
复制任务由集群范围内唯一的 ID 标识。该 ID 由 VMID 和任务编号组成。只有在使用 CLI 工具时,才需要手动指定此 ID。
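On the CLI you can get an overview of the configured replication jobs and their current state, for example with:
pvesr list
pvesr status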
9.5. Network 9.5. 网络
Replication traffic will use the same network as the live guest migration. By
default, this is the management network. To use a different network for the
migration, configure the Migration Network in the web interface under
Datacenter -> Options -> Migration Settings or in the datacenter.cfg. See
Migration Network for more details.
复制流量将使用与实时客户机迁移相同的网络。默认情况下,这是管理网络。要为迁移使用不同的网络,请在网页界面中通过数据中心 -> 选项 -> 迁移设置配置迁移网络,或在 datacenter.cfg 中进行配置。更多详情请参见迁移网络。
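As an illustration, a dedicated migration (and therefore replication) network could be configured in /etc/pve/datacenter.cfg with an entry like the following, where the CIDR is only an example value:
migration: secure,network=10.10.10.0/24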
9.6. Command-line Interface Examples
9.6. 命令行界面示例
Create a replication job which runs every 5 minutes with a limited bandwidth
of 10 MB/s (megabytes per second) for the guest with ID 100.
为 ID 为 100 的客户机创建一个复制任务,该任务每 5 分钟运行一次,带宽限制为 10 MB/s(兆字节每秒)。
# pvesr create-local-job 100-0 pve1 --schedule "*/5" --rate 10
Disable an active job with ID 100-0.
禁用 ID 为 100-0 的活动任务。
# pvesr disable 100-0
Enable a deactivated job with ID 100-0.
启用 ID 为 100-0 的已停用任务。
# pvesr enable 100-0
Change the schedule interval of the job with ID 100-0 to once per hour.
将 ID 为 100-0 的任务的调度间隔更改为每小时一次。
# pvesr update 100-0 --schedule '*/00'
10. QEMU/KVM Virtual Machines
10. QEMU/KVM 虚拟机
QEMU (short form for Quick Emulator) is an open source hypervisor that emulates a
physical computer. From the perspective of the host system where QEMU is
running, QEMU is a user program which has access to a number of local resources
like partitions, files, network cards which are then passed to an
emulated computer which sees them as if they were real devices.
QEMU(Quick Emulator 的缩写)是一款开源的虚拟机管理程序,用于模拟一台物理计算机。从运行 QEMU 的主机系统的角度来看,QEMU 是一个用户程序,它可以访问多个本地资源,如分区、文件、网卡,这些资源随后被传递给模拟的计算机,模拟计算机将其视为真实设备。
A guest operating system running in the emulated computer accesses these
devices, and runs as if it were running on real hardware. For instance, you can pass
an ISO image as a parameter to QEMU, and the OS running in the emulated computer
will see a real CD-ROM inserted into a CD drive.
运行在模拟计算机中的客户操作系统访问这些设备,并像在真实硬件上运行一样运行。例如,你可以将一个 ISO 镜像作为参数传递给 QEMU,运行在模拟计算机中的操作系统将看到一个真实的 CD-ROM 被插入到光驱中。
QEMU can emulate a great variety of hardware from ARM to Sparc, but Proxmox VE is
only concerned with 32 and 64 bits PC clone emulation, since it represents the
overwhelming majority of server hardware. The emulation of PC clones is also one
of the fastest due to the availability of processor extensions which greatly
speed up QEMU when the emulated architecture is the same as the host
architecture.
QEMU 可以模拟从 ARM 到 Sparc 的多种硬件,但 Proxmox VE 只关注 32 位和 64 位 PC 克隆机的模拟,因为它们代表了绝大多数服务器硬件。PC 克隆机的模拟也是最快的之一,因为处理器扩展的存在极大地加速了当模拟架构与主机架构相同时的 QEMU 运行速度。
|
|
You may sometimes encounter the term KVM (Kernel-based Virtual Machine).
It means that QEMU is running with the support of the virtualization processor
extensions, via the Linux KVM module. In the context of Proxmox VE QEMU and
KVM can be used interchangeably, as QEMU in Proxmox VE will always try to load the KVM
module. 您有时可能会遇到术语 KVM(基于内核的虚拟机)。它意味着 QEMU 在 Linux KVM 模块的支持下,通过虚拟化处理器扩展运行。在 Proxmox VE 的上下文中,QEMU 和 KVM 可以互换使用,因为 Proxmox VE 中的 QEMU 总是尝试加载 KVM 模块。 |
QEMU inside Proxmox VE runs as a root process, since this is required to access block
and PCI devices.
Proxmox VE 中的 QEMU 作为 root 进程运行,因为访问块设备和 PCI 设备需要这样做。
10.1. Emulated devices and paravirtualized devices
10.1. 模拟设备和准虚拟化设备
The PC hardware emulated by QEMU includes a motherboard, network controllers,
SCSI, IDE and SATA controllers, serial ports (the complete list can be seen in
the kvm(1) man page), all of them emulated in software. All these devices
are the exact software equivalent of existing hardware devices, and if the OS
running in the guest has the proper drivers it will use the devices as if it
were running on real hardware. This allows QEMU to run unmodified operating
systems.
QEMU 模拟的 PC 硬件包括主板、网络控制器、SCSI、IDE 和 SATA 控制器、串口(完整列表可见于 kvm(1)手册页),所有这些设备均通过软件模拟。这些设备都是现有硬件设备的精确软件等价物,如果运行在虚拟机中的操作系统拥有相应的驱动程序,它将像在真实硬件上一样使用这些设备。这使得 QEMU 能够运行未修改的操作系统。
This however has a performance cost, as running in software what was meant to
run in hardware involves a lot of extra work for the host CPU. To mitigate this,
QEMU can present to the guest operating system paravirtualized devices, where
the guest OS recognizes it is running inside QEMU and cooperates with the
hypervisor.
然而,这会带来性能上的代价,因为在软件中运行本应在硬件中运行的功能,会给主机 CPU 带来大量额外工作。为缓解这一问题,QEMU 可以向客户操作系统呈现半虚拟化设备,客户操作系统识别出自己运行在 QEMU 中,并与虚拟机管理程序协作。
QEMU relies on the virtio virtualization standard, and is thus able to present
paravirtualized virtio devices, which includes a paravirtualized generic disk
controller, a paravirtualized network card, a paravirtualized serial port,
a paravirtualized SCSI controller, etc …
QEMU 依赖于 virtio 虚拟化标准,因此能够呈现半虚拟化的 virtio 设备,包括半虚拟化的通用磁盘控制器、半虚拟化的网卡、半虚拟化的串口、半虚拟化的 SCSI 控制器等……
|
|
It is highly recommended to use the virtio devices whenever you can, as
they provide a big performance improvement and are generally better maintained.
Using the virtio generic disk controller versus an emulated IDE controller will
double the sequential write throughput, as measured with bonnie++(8). Using
the virtio network interface can deliver up to three times the throughput of an
emulated Intel E1000 network card, as measured with iperf(1). [34] 强烈建议尽可能使用 virtio 设备,因为它们提供了显著的性能提升,且通常维护得更好。使用 virtio 通用磁盘控制器代替模拟的 IDE 控制器,顺序写入吞吐量将翻倍(通过 bonnie++(8)测量)。使用 virtio 网络接口,吞吐量可达到模拟 Intel E1000 网卡的三倍(通过 iperf(1)测量)。[34] |
10.2. Virtual Machines Settings
10.2. 虚拟机设置
Generally speaking Proxmox VE tries to choose sane defaults for virtual machines
(VM). Make sure you understand the meaning of the settings you change, as it
could incur a performance slowdown or put your data at risk.
一般来说,Proxmox VE 会为虚拟机(VM)选择合理的默认设置。请确保您理解所更改设置的含义,因为这可能导致性能下降或使您的数据面临风险。
10.2.1. General Settings
10.2.1. 一般设置
-
the Node : the physical server on which the VM will run
节点:虚拟机将运行的物理服务器 -
the VM ID: a unique number in this Proxmox VE installation used to identify your VM
虚拟机 ID:在此 Proxmox VE 安装中用于识别您的虚拟机的唯一编号 -
Name: a free form text string you can use to describe the VM
名称:您可以用来自由描述虚拟机的文本字符串 -
Resource Pool: a logical group of VMs
资源池:虚拟机的逻辑分组
10.2.2. OS Settings 10.2.2. 操作系统设置
When creating a virtual machine (VM), setting the proper Operating System(OS)
allows Proxmox VE to optimize some low-level parameters. For instance, Windows OSes
expect the BIOS clock to use the local time, while Unix-based OSes expect the
BIOS clock to have the UTC time.
创建虚拟机(VM)时,设置正确的操作系统(OS)可以让 Proxmox VE 优化一些底层参数。例如,Windows 操作系统期望 BIOS 时钟使用本地时间,而基于 Unix 的操作系统期望 BIOS 时钟使用 UTC 时间。
10.2.3. System Settings 10.2.3. 系统设置
On VM creation you can change some basic system components of the new VM. You
can specify which display type you want to use.
在创建虚拟机时,您可以更改新虚拟机的一些基本系统组件。您可以指定想要使用的显示类型。
Additionally, the SCSI controller can be changed.
If you plan to install the QEMU Guest Agent, or if your selected ISO image
already ships and installs it automatically, you may want to tick the QEMU
Agent box, which lets Proxmox VE know that it can use its features to show some
more information, and complete some actions (for example, shutdown or
snapshots) more intelligently.
此外,还可以更改 SCSI 控制器。如果您计划安装 QEMU Guest Agent,或者您选择的 ISO 镜像已经包含并自动安装了它,您可能需要勾选 QEMU Agent 选项,这样 Proxmox VE 就能知道可以使用其功能来显示更多信息,并更智能地完成一些操作(例如,关机或快照)。
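If you decide to enable the agent later, this can also be done on an existing VM from the CLI, for example (the VM ID is a placeholder):
qm set <vmid> --agent enabled=1
Keep in mind that the QEMU Guest Agent also has to be installed and running inside the guest for this to have any effect.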
Proxmox VE allows to boot VMs with different firmware and machine types, namely
SeaBIOS and OVMF. In most cases you want to switch from
the default SeaBIOS to OVMF only if you plan to use
PCIe passthrough.
Proxmox VE 允许使用不同的固件和机器类型启动虚拟机,即 SeaBIOS 和 OVMF。在大多数情况下,只有当您计划使用 PCIe 直通时,才需要将默认的 SeaBIOS 切换为 OVMF。
Machine Type 机器类型
A VM’s Machine Type defines the hardware layout of the VM’s virtual
motherboard. You can choose between the default
Intel 440FX or the
Q35
chipset, which also provides a virtual PCIe bus, and thus may be
desired if you want to pass through PCIe hardware.
Additionally, you can select a vIOMMU implementation.
虚拟机的机器类型定义了虚拟机虚拟主板的硬件布局。您可以选择默认的 Intel 440FX 或 Q35 芯片组,后者还提供了虚拟 PCIe 总线,因此如果您想直通 PCIe 硬件,可能会更合适。此外,您还可以选择 vIOMMU 实现。
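For example, switching an existing VM to the Q35 chipset can be done from the CLI as sketched below; the VM should be powered off, and be prepared for the guest to see different hardware afterwards:
qm set <vmid> --machine q35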
Machine Version 机器版本
Each machine type is versioned in QEMU and a given QEMU binary supports many
machine versions. New versions might bring support for new features, fixes or
general improvements. However, they also change properties of the virtual
hardware. To avoid sudden changes from the guest’s perspective and ensure
compatibility of the VM state, live-migration and snapshots with RAM will keep
using the same machine version in the new QEMU instance.
每种机器类型在 QEMU 中都有版本,一个给定的 QEMU 二进制文件支持多种机器版本。新版本可能带来对新功能的支持、修复或整体改进。然而,它们也会改变虚拟硬件的属性。为了避免从客户机角度出现突变并确保虚拟机状态的兼容性,带有内存快照的实时迁移和快照将继续在新的 QEMU 实例中使用相同的机器版本。
For Windows guests, the machine version is pinned during creation, because
Windows is sensitive to changes in the virtual hardware - even between cold
boots. For example, the enumeration of network devices might be different with
different machine versions. Other OSes like Linux can usually deal with such
changes just fine. For those, the Latest machine version is used by default.
This means that after a fresh start, the newest machine version supported by the
QEMU binary is used (e.g. the newest machine version QEMU 8.1 supports is
version 8.1 for each machine type).
对于 Windows 客户机,机器版本在创建时被固定,因为 Windows 对虚拟硬件的变化非常敏感——即使是在冷启动之间。例如,不同机器版本下网络设备的枚举可能不同。其他操作系统如 Linux 通常可以很好地处理这些变化。对于这些系统,默认使用最新的机器版本。这意味着在全新启动后,使用的是 QEMU 二进制文件支持的最新机器版本(例如,QEMU 8.1 支持的每种机器类型的最新机器版本是 8.1)。
The machine version is also used as a safeguard when implementing new features
or fixes that would change the hardware layout to ensure backward compatibility.
For operations on a running VM, such as live migrations, the running machine
version is saved to ensure that the VM can be recovered exactly as it was, not
only from a QEMU virtualization perspective, but also in terms of how Proxmox VE will
create the QEMU virtual machine instance.
机器版本也用作在实现新功能或修复可能改变硬件布局时的保护措施,以确保向后兼容性。对于正在运行的虚拟机(VM)上的操作,例如实时迁移,运行中的机器版本会被保存,以确保虚拟机能够被准确恢复,不仅从 QEMU 虚拟化的角度来看,也包括 Proxmox VE 创建 QEMU 虚拟机实例的方式。
Sometimes Proxmox VE needs to make changes to the hardware layout or modify options
without waiting for a new QEMU release. For this, Proxmox VE has added an extra
downstream revision in the form of +pveX.
In these revisions, X is 0 for each new QEMU machine version and is omitted in
this case, e.g. machine version pc-q35-9.2 would be the same as machine
version pc-q35-9.2+pve0.
有时 Proxmox VE 需要对硬件布局进行更改或修改选项,而无需等待新的 QEMU 发布。为此,Proxmox VE 添加了一个额外的下游修订版本,形式为+pveX。在这些修订中,X 对于每个新的 QEMU 机器版本从 0 开始计数,并且在这种情况下可以省略,例如,机器版本 pc-q35-9.2 与机器版本 pc-q35-9.2+pve0 相同。
If Proxmox VE wants to change the hardware layout or a default option, the revision
is incremented and used for newly created guests or on reboot for VMs that
always use the latest machine version.
如果 Proxmox VE 想要更改硬件布局或默认选项,则会增加修订号,并用于新创建的客户机或始终使用最新机器版本的虚拟机重启时。
QEMU Machine Version Deprecation
QEMU 机器版本弃用
Starting with QEMU 10.1, machine versions are removed from upstream QEMU after 6
years. In Proxmox VE, major releases happen approximately every 2 years, so a major
Proxmox VE release will support machine versions from approximately two previous
major Proxmox VE releases.
从 QEMU 10.1 开始,机器版本在上游 QEMU 中会在 6 年后被移除。在 Proxmox VE 中,主要版本大约每 2 年发布一次,因此一个主要的 Proxmox VE 版本将支持大约前两个主要 Proxmox VE 版本的机器版本。
Before upgrading to a new major Proxmox VE release, you should update VM
configurations to avoid all machine versions that will be dropped during the
next major Proxmox VE release. This ensures that the guests can still be used
throughout that release. See the section
Update to a Newer Machine Version.
在升级到新的主要 Proxmox VE 版本之前,您应更新虚拟机配置,以避免所有将在下一个主要 Proxmox VE 版本中被弃用的机器版本。这确保了虚拟机在该版本周期内仍然可用。请参阅章节“更新到更新的机器版本”。
The removal policy is not yet in effect for Proxmox VE 8, so the baseline for
supported machine versions is 2.4. The last QEMU binary version released for
Proxmox VE 9 is expected to be QEMU 11.2. This QEMU binary will remove support for
machine versions older than 6.0, so 6.0 is the baseline for the Proxmox VE 9 release
life cycle. The baseline is expected to increase by 2 major versions for each
major Proxmox VE release, for example 8.0 for Proxmox VE 10.
该移除政策尚未在 Proxmox VE 8 中生效,因此支持的机器版本基线为 2.4。预计 Proxmox VE 9 发布的最后一个 QEMU 二进制版本为 QEMU 11.2。该 QEMU 二进制版本将移除对早于 6.0 机器版本的支持,因此 6.0 是 Proxmox VE 9 版本生命周期的基线。预计每个主要 Proxmox VE 版本,基线将增加 2 个主要版本,例如 Proxmox VE 10 的基线为 8.0。
Update to a Newer Machine Version
更新到更新的机器版本
If you see a deprecation warning, you should change the machine version to a
newer one. Be sure to have a working backup first and be prepared for changes to
how the guest sees hardware. In some scenarios, re-installing certain drivers
might be required. You should also check for snapshots with RAM that were taken
with these machine versions (i.e. the runningmachine configuration entry).
Unfortunately, there is no way to change the machine version of a snapshot, so
you’d need to load the snapshot to salvage any data from it.
如果看到弃用警告,您应该将机器版本更改为更新的版本。请确保先有一个可用的备份,并准备好应对客户机硬件视图的变化。在某些情况下,可能需要重新安装某些驱动程序。您还应检查使用这些机器版本(即 runningmachine 配置条目)拍摄的带有内存的快照。不幸的是,快照的机器版本无法更改,因此您需要加载快照以从中恢复任何数据。
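As a sketch, pinning a VM to a specific, newer machine version can also be done from the CLI; the exact version string depends on which machine versions your installed QEMU build supports:
qm set <vmid> --machine pc-q35-9.2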
10.2.4. Hard Disk 10.2.4. 硬盘
Bus/Controller 总线/控制器
QEMU can emulate a number of storage controllers:
QEMU 可以模拟多种存储控制器:
|
|
It is highly recommended to use the VirtIO SCSI or VirtIO Block
controller for performance reasons and because they are better maintained. 强烈建议使用 VirtIO SCSI 或 VirtIO Block 控制器,原因是性能更好且维护更完善。 |
-
the IDE controller, has a design which goes back to the 1984 PC/AT disk controller. Even if this controller has been superseded by recent designs, each and every OS you can think of has support for it, making it a great choice if you want to run an OS released before 2003. You can connect up to 4 devices on this controller.
IDE 控制器的设计可以追溯到 1984 年的 PC/AT 磁盘控制器。尽管该控制器已被较新的设计取代,但几乎所有操作系统都支持它,因此如果你想运行 2003 年之前发布的操作系统,这是一个很好的选择。该控制器最多可连接 4 个设备。 -
the SATA (Serial ATA) controller, dating from 2003, has a more modern design, allowing higher throughput and a greater number of devices to be connected. You can connect up to 6 devices on this controller.
SATA(串行 ATA)控制器起源于 2003 年,设计更现代,支持更高的吞吐量和更多设备的连接。该控制器最多可连接 6 个设备。 -
the SCSI controller, designed in 1985, is commonly found on server grade hardware, and can connect up to 14 storage devices. Proxmox VE emulates by default a LSI 53C895A controller.
SCSI 控制器,设计于 1985 年,常见于服务器级硬件上,最多可连接 14 个存储设备。Proxmox VE 默认模拟 LSI 53C895A 控制器。A SCSI controller of type VirtIO SCSI single and enabling the IO Thread setting for the attached disks is recommended if you aim for performance. This is the default for newly created Linux VMs since Proxmox VE 7.3. Each disk will have its own VirtIO SCSI controller, and QEMU will handle the disks IO in a dedicated thread. Linux distributions have support for this controller since 2012, and FreeBSD since 2014. For Windows OSes, you need to provide an extra ISO containing the drivers during the installation.
如果追求性能,建议使用 VirtIO SCSI 类型的 SCSI 控制器,并为所连接的磁盘启用 IO 线程设置。自 Proxmox VE 7.3 起,新创建的 Linux 虚拟机默认采用此设置。每个磁盘将拥有自己的 VirtIO SCSI 控制器,QEMU 将在专用线程中处理磁盘 IO。Linux 发行版自 2012 年起支持该控制器,FreeBSD 自 2014 年起支持。对于 Windows 操作系统,安装时需要提供包含驱动程序的额外 ISO。 -
The VirtIO Block controller, often just called VirtIO or virtio-blk, is an older type of paravirtualized controller. It has been superseded by the VirtIO SCSI Controller, in terms of features.
VirtIO Block 控制器,通常简称为 VirtIO 或 virtio-blk,是一种较旧的半虚拟化控制器类型。在功能方面,它已被 VirtIO SCSI 控制器取代。
Image Format 镜像格式
On each controller you attach a number of emulated hard disks, which are backed
by a file or a block device residing in the configured storage. The choice of
a storage type will determine the format of the hard disk image. Storages which
present block devices (LVM, ZFS, Ceph) will require the raw disk image format,
whereas file-based storages (Ext4, NFS, CIFS, GlusterFS) will let you choose
either the raw disk image format or the QEMU image format.
在每个控制器上,您可以连接多个模拟硬盘,这些硬盘由配置存储中的文件或区块设备支持。存储类型的选择将决定硬盘镜像的格式。提供区块设备的存储(如 LVM、ZFS、Ceph)需要使用原始磁盘镜像格式,而基于文件的存储(如 Ext4、NFS、CIFS、GlusterFS)则允许您选择原始磁盘镜像格式或 QEMU 镜像格式。
-
the QEMU image format is a copy on write format which allows snapshots, and thin provisioning of the disk image.
QEMU 镜像格式是一种写时复制格式,支持快照和磁盘镜像的精简配置。 -
the raw disk image is a bit-to-bit image of a hard disk, similar to what you would get when executing the dd command on a block device in Linux. This format does not support thin provisioning or snapshots by itself, requiring cooperation from the storage layer for these tasks. It may, however, be up to 10% faster than the QEMU image format. [35]
原始磁盘镜像是硬盘的逐位映像,类似于在 Linux 中对区块设备执行 dd 命令得到的结果。该格式本身不支持精简配置或快照,这些功能需要存储层的配合。不过,它的速度可能比 QEMU 镜像格式快约 10%。[35] -
the VMware image format only makes sense if you intend to import/export the disk image to other hypervisors.
VMware 镜像格式仅在您打算将磁盘镜像导入/导出到其他虚拟机管理程序时才有意义。
Cache Mode 缓存模式
Setting the Cache mode of the hard drive will impact how the host system will
notify the guest systems of block write completions. The No cache default
means that the guest system will be notified that a write is complete when each
block reaches the physical storage write queue, ignoring the host page cache.
This provides a good balance between safety and speed.
设置硬盘的缓存模式将影响主机系统如何通知客户系统区块写入完成。默认的无缓存意味着当每个区块到达物理存储写入队列时,客户系统将被通知写入完成,忽略主机页面缓存。这在安全性和速度之间提供了良好的平衡。
If you want the Proxmox VE backup manager to skip a disk when doing a backup of a VM,
you can set the No backup option on that disk.
如果您希望 Proxmox VE 备份管理器在备份虚拟机时跳过某个硬盘,可以在该硬盘上设置“无备份”选项。
If you want the Proxmox VE storage replication mechanism to skip a disk when starting
a replication job, you can set the Skip replication option on that disk.
As of Proxmox VE 5.0, replication requires the disk images to be on a storage of type
zfspool, so adding a disk image to other storages when the VM has replication
configured requires to skip replication for this disk image.
如果您希望 Proxmox VE 存储复制机制在启动复制任务时跳过某个硬盘,可以在该硬盘上设置“跳过复制”选项。从 Proxmox VE 5.0 开始,复制要求磁盘镜像位于类型为 zfspool 的存储上,因此当虚拟机配置了复制时,将磁盘镜像添加到其他存储时需要为该磁盘镜像设置跳过复制。
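These per-disk options can also be set from the CLI by re-specifying the drive with the desired flags, for example (storage and volume names are placeholders):
qm set <vmid> --scsi0 <storage>:<volume>,cache=none,backup=0,replicate=0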
Trim/Discard
If your storage supports thin provisioning (see the storage chapter in the
Proxmox VE guide), you can activate the Discard option on a drive. With Discard
set and a TRIM-enabled guest OS [36], when the VM’s filesystem
marks blocks as unused after deleting files, the controller will relay this
information to the storage, which will then shrink the disk image accordingly.
For the guest to be able to issue TRIM commands, you must enable the Discard
option on the drive. Some guest operating systems may also require the
SSD Emulation flag to be set. Note that Discard on VirtIO Block drives is
only supported on guests using Linux Kernel 5.0 or higher.
如果您的存储支持精简配置(请参阅 Proxmox VE 指南中的存储章节),您可以在驱动器上启用 Discard 选项。启用 Discard 且客户操作系统支持 TRIM 功能[36]时,当虚拟机的文件系统在删除文件后将区块标记为未使用,控制器会将此信息传递给存储,存储随后会相应地缩减磁盘映像。为了让客户机能够发出 TRIM 命令,必须在驱动器上启用 Discard 选项。某些客户操作系统可能还需要设置 SSD 仿真标志。请注意,VirtIO Block 驱动器上的 Discard 仅支持使用 Linux 内核 5.0 或更高版本的客户机。
If you would like a drive to be presented to the guest as a solid-state drive
rather than a rotational hard disk, you can set the SSD emulation option on
that drive. There is no requirement that the underlying storage actually be
backed by SSDs; this feature can be used with physical media of any type.
Note that SSD emulation is not supported on VirtIO Block drives.
如果您希望将驱动器呈现为固态硬盘而非旋转硬盘,可以在该驱动器上设置 SSD 仿真选项。底层存储不必实际由 SSD 支持;此功能可用于任何类型的物理介质。请注意,VirtIO Block 驱动器不支持 SSD 仿真。
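For example, to enable both Discard and SSD emulation on an existing drive (again with placeholder storage and volume names):
qm set <vmid> --scsi0 <storage>:<volume>,discard=on,ssd=1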
IO Thread IO 线程
The option IO Thread can only be used when using a disk with the VirtIO
controller, or with the SCSI controller, when the emulated controller type is
VirtIO SCSI single. With IO Thread enabled, QEMU creates one I/O thread per
storage controller rather than handling all I/O in the main event loop or vCPU
threads. One benefit is better work distribution and utilization of the
underlying storage. Another benefit is reduced latency (hangs) in the guest for
very I/O-intensive host workloads, since neither the main thread nor a vCPU
thread can be blocked by disk I/O.
IO 线程选项仅能在使用 VirtIO 控制器的磁盘,或在仿真控制器类型为 VirtIO SCSI single 的 SCSI 控制器时使用。启用 IO 线程后,QEMU 会为每个存储控制器创建一个 I/O 线程,而不是在主事件循环或 vCPU 线程中处理所有 I/O。这样做的一个好处是更好地分配工作和利用底层存储资源。另一个好处是在主机进行非常 I/O 密集型工作负载时,减少来宾系统的延迟(卡顿),因为磁盘 I/O 不会阻塞主线程或 vCPU 线程。
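A typical combination, sketched for the CLI, is the VirtIO SCSI single controller together with IO Thread enabled on the disk (placeholders as before):
qm set <vmid> --scsihw virtio-scsi-single --scsi0 <storage>:<volume>,iothread=1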
10.2.5. CPU
A CPU socket is a physical slot on a PC motherboard where you can plug a CPU.
This CPU can then contain one or many cores, which are independent
processing units. Whether you have a single CPU socket with 4 cores, or two CPU
sockets with two cores is mostly irrelevant from a performance point of view.
However some software licenses depend on the number of sockets a machine has,
in that case it makes sense to set the number of sockets to what the license
allows you.
CPU 插槽是 PC 主板上的一个物理插槽,可以插入 CPU。该 CPU 可以包含一个或多个核心,核心是独立的处理单元。从性能角度来看,拥有一个 4 核的 CPU 插槽,还是两个 2 核的 CPU 插槽,基本上没有太大区别。然而,一些软件许可取决于机器的插槽数量,在这种情况下,将插槽数量设置为许可允许的数量是有意义的。
Increasing the number of virtual CPUs (cores and sockets) will usually provide a
performance improvement though that is heavily dependent on the use of the VM.
Multi-threaded applications will of course benefit from a large number of
virtual CPUs, as for each virtual cpu you add, QEMU will create a new thread of
execution on the host system. If you’re not sure about the workload of your VM,
it is usually a safe bet to set the number of Total cores to 2.
增加虚拟 CPU 的数量(核心和插槽)通常会带来性能提升,但这在很大程度上取决于虚拟机的使用情况。多线程应用程序当然会从大量虚拟 CPU 中受益,因为每增加一个虚拟 CPU,QEMU 将在主机系统上创建一个新的执行线程。如果您不确定虚拟机的工作负载,通常将总核心数设置为 2 是一个安全的选择。
|
|
It is perfectly safe if the overall number of cores of all your VMs
is greater than the number of cores on the server (for example, 4 VMs each with
4 cores (= total 16) on a machine with only 8 cores). In that case the host
system will balance the QEMU execution threads between your server cores, just
like if you were running a standard multi-threaded application. However, Proxmox VE
will prevent you from starting VMs with more virtual CPU cores than physically
available, as this will only bring the performance down due to the cost of
context switches. 如果所有虚拟机的核心总数超过服务器上的核心数(例如,4 台虚拟机每台 4 个核心(=总共 16 个核心),而机器只有 8 个核心),这也是完全安全的。在这种情况下,主机系统会在服务器核心之间平衡 QEMU 执行线程,就像运行标准多线程应用程序一样。然而,Proxmox VE 会阻止您启动虚拟 CPU 核心数超过物理可用核心数的虚拟机,因为这只会由于上下文切换的开销而降低性能。 |
Resource Limits 资源限制
cpulimit
In addition to the number of virtual cores, the total available “Host CPU
Time” for the VM can be set with the cpulimit option. It is a floating point
value representing CPU time in percent, so 1.0 is equal to 100%, 2.5 to
250% and so on. If a single process would fully use one single core it would
have 100% CPU Time usage. If a VM with four cores utilizes all its cores
fully it would theoretically use 400%. In reality the usage may be even a bit
higher as QEMU can have additional threads for VM peripherals besides the vCPU
core ones.
除了虚拟核心数量外,还可以通过 cpulimit 选项设置虚拟机的总可用“主机 CPU 时间”。这是一个浮点值,表示 CPU 时间的百分比,因此 1.0 等于 100%,2.5 等于 250%,依此类推。如果单个进程完全使用一个核心,则其 CPU 时间使用率为 100%。如果一个拥有四个核心的虚拟机完全利用所有核心,理论上其使用率为 400%。实际上,使用率可能会更高一些,因为除了 vCPU 核心线程外,QEMU 还可能有用于虚拟机外设的额外线程。
This setting can be useful when a VM should have multiple vCPUs because it is
running some processes in parallel, but the VM as a whole should not be able to
run all vCPUs at 100% at the same time.
当虚拟机需要多个 vCPU 来并行运行某些进程,但整体上不应允许所有 vCPU 同时达到 100% 使用率时,此设置非常有用。
For example, suppose you have a virtual machine that would benefit from having 8
virtual CPUs, but you don’t want the VM to be able to max out all 8 cores
running at full load - because that would overload the server and leave other
virtual machines and containers with too little CPU time. To solve this, you
could set cpulimit to 4.0 (=400%). This means that if the VM fully utilizes
all 8 virtual CPUs by running 8 processes simultaneously, each vCPU will receive
a maximum of 50% CPU time from the physical cores. However, if the VM workload
only fully utilizes 4 virtual CPUs, it could still receive up to 100% CPU time
from a physical core, for a total of 400%.
例如,假设你有一台虚拟机,配置了 8 个虚拟 CPU,但你不希望该虚拟机能够在满负载时使用全部 8 个核心——因为那样会导致服务器过载,其他虚拟机和容器的 CPU 时间会不足。为了解决这个问题,你可以将 cpulimit 设置为 4.0(即 400%)。这意味着如果虚拟机通过同时运行 8 个进程来充分利用所有 8 个虚拟 CPU,每个虚拟 CPU 将从物理核心获得最多 50%的 CPU 时间。然而,如果虚拟机的工作负载只充分利用 4 个虚拟 CPU,它仍然可以从物理核心获得最多 100%的 CPU 时间,总计 400%。
|
|
VMs can, depending on their configuration, use additional threads, such
as for networking or IO operations but also live migration. Thus a VM can show
up to use more CPU time than just its virtual CPUs could use. To ensure that a
VM never uses more CPU time than vCPUs assigned, set the cpulimit to
the same value as the total core count. 虚拟机根据其配置,可能会使用额外的线程,例如用于网络或 IO 操作,也包括实时迁移。因此,虚拟机显示的 CPU 使用时间可能超过其虚拟 CPU 的使用时间。为了确保虚拟机使用的 CPU 时间永远不会超过分配的虚拟 CPU 数量,应将 cpulimit 设置为与总核心数相同的值。 |
cpuunits
With the cpuunits option, nowadays often called CPU shares or CPU weight, you
can control how much CPU time a VM gets compared to other running VMs. It is a
relative weight which defaults to 100 (or 1024 if the host uses legacy
cgroup v1). If you increase this for a VM it will be prioritized by the
scheduler in comparison to other VMs with lower weight.
使用 cpuunits 选项,现在通常称为 CPU 份额或 CPU 权重,您可以控制一个虚拟机相对于其他正在运行的虚拟机获得多少 CPU 时间。这是一个相对权重,默认值为 100(如果主机使用传统的 cgroup v1,则为 1024)。如果您为某个虚拟机增加此值,调度程序会优先考虑该虚拟机,相对于权重较低的其他虚拟机。
For example, if VM 100 has set the default 100 and VM 200 was changed to
200, the latter VM 200 would receive twice the CPU bandwidth of the first
VM 100.
例如,如果虚拟机 100 设置为默认的 100,而虚拟机 200 被更改为 200,那么后者虚拟机 200 将获得比前者虚拟机 100 多两倍的 CPU 带宽。
For more information see man systemd.resource-control, here CPUQuota
corresponds to cpulimit and CPUWeight to our cpuunits setting. Visit its
Notes section for references and implementation details.
更多信息请参见 man systemd.resource-control,其中 CPUQuota 对应于 cpulimit,CPUWeight 对应于我们的 cpuunits 设置。请访问其注释部分以获取参考资料和实现细节。
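The VM 100 / VM 200 example from above would, for instance, be set up like this:
qm set 100 --cpuunits 100
qm set 200 --cpuunits 200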
affinity
With the affinity option, you can specify the physical CPU cores that are used
to run the VM’s vCPUs. Peripheral VM processes, such as those for I/O, are not
affected by this setting. Note that the CPU affinity is not a security
feature.
使用亲和性选项,您可以指定用于运行虚拟机 vCPU 的物理 CPU 核心。外围虚拟机进程,例如用于 I/O 的进程,不受此设置影响。请注意,CPU 亲和性并不是一种安全功能。
Forcing a CPU affinity can make sense in certain cases but is accompanied by
an increase in complexity and maintenance effort. For example, if you want to
add more VMs later or migrate VMs to nodes with fewer CPU cores. It can also
easily lead to asynchronous and therefore limited system performance if some
CPUs are fully utilized while others are almost idle.
强制设置 CPU 亲和性在某些情况下是有意义的,但会增加复杂性和维护工作量。例如,如果您以后想添加更多虚拟机或将虚拟机迁移到 CPU 核心较少的节点。它也很容易导致异步,从而限制系统性能,如果某些 CPU 被完全利用而其他 CPU 几乎空闲。
The affinity is set through the taskset CLI tool. It accepts the host CPU
numbers (see lscpu) in the List Format from man cpuset. This ASCII decimal
list can contain numbers but also number ranges. For example, the affinity
0-1,8-11 (expanded 0, 1, 8, 9, 10, 11) would allow the VM to run on only
these six specific host cores.
亲和性通过 taskset 命令行工具设置。它接受主机 CPU 编号(参见 lscpu)并采用 man cpuset 中的列表格式。该 ASCII 十进制列表可以包含数字,也可以包含数字范围。例如,亲和性 0-1,8-11(展开为 0, 1, 8, 9, 10, 11)将允许虚拟机仅在这六个特定的主机核心上运行。
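For example, the affinity from above could be applied to a VM as follows (the VM ID is a placeholder):
qm set <vmid> --affinity 0-1,8-11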
CPU Type CPU 类型
QEMU can emulate a number of different CPU types from 486 to the latest Xeon
processors. Each new processor generation adds new features, like hardware
assisted 3d rendering, random number generation, memory protection, etc. Also,
a current generation can be upgraded through
microcode update with bug or security fixes.
QEMU 可以模拟多种不同的 CPU 类型,从 486 到最新的 Xeon 处理器。每一代新处理器都会增加新功能,比如硬件辅助的 3D 渲染、随机数生成、内存保护等。此外,当前代处理器还可以通过微代码更新来修复漏洞或安全问题。
Usually you should select for your VM a processor type which closely matches the
CPU of the host system, as it means that the host CPU features (also called CPU
flags ) will be available in your VMs. If you want an exact match, you can set
the CPU type to host in which case the VM will have exactly the same CPU flags
as your host system.
通常,你应该为虚拟机选择一个与主机系统 CPU 类型相近的处理器类型,因为这意味着主机 CPU 的特性(也称为 CPU 标志)将在你的虚拟机中可用。如果你想要完全匹配,可以将 CPU 类型设置为 host,这样虚拟机将拥有与主机系统完全相同的 CPU 标志。
This has a downside though. If you want to do a live migration of VMs between
different hosts, your VM might end up on a new system with a different CPU type
or a different microcode version.
If the CPU flags passed to the guest are missing, the QEMU process will stop. To
remedy this QEMU has also its own virtual CPU types, that Proxmox VE uses by default.
不过,这也有一个缺点。如果你想在不同主机之间进行虚拟机的实时迁移,虚拟机可能会运行在一个具有不同 CPU 类型或不同微代码版本的新系统上。如果传递给客户机的 CPU 标志缺失,QEMU 进程将会停止。为了解决这个问题,QEMU 还提供了自己的虚拟 CPU 类型,Proxmox VE 默认使用这些类型。
The backend default is kvm64 which works on essentially all x86_64 host CPUs
and the UI default when creating a new VM is x86-64-v2-AES, which requires a
host CPU starting from Westmere for Intel or at least a fourth generation
Opteron for AMD.
后端默认是 kvm64,几乎适用于所有 x86_64 主机 CPU;而在创建新虚拟机时,界面默认使用的是 x86-64-v2-AES,这要求主机 CPU 至少是 Intel 的 Westmere 或 AMD 的第四代 Opteron。
In short: 简而言之:
If you don’t care about live migration or have a homogeneous cluster where all
nodes have the same CPU and same microcode version, set the CPU type to host, as
in theory this will give your guests maximum performance.
如果你不关心实时迁移,或者拥有一个所有节点 CPU 和微代码版本都相同的同质集群,建议将 CPU 类型设置为 host,理论上这将为你的虚拟机提供最大性能。
If you care about live migration and security, and you have only Intel CPUs or
only AMD CPUs, choose the lowest generation CPU model of your cluster.
如果你关心实时迁移和安全性,并且集群中只有 Intel CPU 或只有 AMD CPU,选择集群中最低代的 CPU 型号。
If you care about live migration without security, or have mixed Intel/AMD
cluster, choose the lowest compatible virtual QEMU CPU type.
如果您关心无安全性的实时迁移,或拥有混合的 Intel/AMD 集群,请选择最低兼容的虚拟 QEMU CPU 类型。
|
|
Live migrations between Intel and AMD host CPUs have no guarantee to work. Intel 和 AMD 主机 CPU 之间的实时迁移无法保证可行。 |
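For instance, depending on which of the cases above applies to you, the CPU type could be set from the CLI like this (pick one; the VM ID is a placeholder):
qm set <vmid> --cpu host
qm set <vmid> --cpu x86-64-v2-AES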
See also
List of AMD and Intel CPU Types as Defined in QEMU.
另请参见 QEMU 中定义的 AMD 和 Intel CPU 类型列表。
QEMU CPU Types QEMU CPU 类型
QEMU also provide virtual CPU types, compatible with both Intel and AMD host
CPUs.
QEMU 还提供了虚拟 CPU 类型,兼容 Intel 和 AMD 主机 CPU。
|
|
To mitigate the Spectre vulnerability for virtual CPU types, you need to
add the relevant CPU flags, see
Meltdown / Spectre related CPU flags. 为了缓解虚拟 CPU 类型的 Spectre 漏洞,您需要添加相关的 CPU 标志,详见 Meltdown / Spectre 相关的 CPU 标志。 |
Historically, Proxmox VE had the kvm64 CPU model, with CPU flags at the level of
Pentium 4 enabled, so performance was not great for certain workloads.
历史上,Proxmox VE 使用的是 kvm64 CPU 模型,启用了相当于 Pentium 4 级别的 CPU 标志,因此在某些工作负载下性能表现不佳。
In the summer of 2020, AMD, Intel, Red Hat, and SUSE collaborated to define
three x86-64 microarchitecture levels on top of the x86-64 baseline, with modern
flags enabled. For details, see the
x86-64-ABI specification.
2020 年夏季,AMD、Intel、Red Hat 和 SUSE 共同合作,在 x86-64 基线之上定义了三个 x86-64 微架构级别,启用了现代标志。详情请参见 x86-64-ABI 规范。
|
|
Some newer distributions like CentOS 9 are now built with x86-64-v2
flags as a minimum requirement. 一些较新的发行版,如 CentOS 9,现在将 x86-64-v2 标志作为最低要求。 |
-
kvm64 (x86-64-v1): Compatible with Intel CPU >= Pentium 4, AMD CPU >= Phenom.
kvm64(x86-64-v1):兼容 Intel CPU >= Pentium 4,AMD CPU >= Phenom。 -
x86-64-v2: Compatible with Intel CPU >= Nehalem, AMD CPU >= Opteron_G3. Added CPU flags compared to x86-64-v1: +cx16, +lahf-lm, +popcnt, +pni, +sse4.1, +sse4.2, +ssse3.
x86-64-v2:兼容 Intel CPU >= Nehalem,AMD CPU >= Opteron_G3。相比 x86-64-v1 新增的 CPU 标志:+cx16,+lahf-lm,+popcnt,+pni,+sse4.1,+sse4.2,+ssse3。 -
x86-64-v2-AES: Compatible with Intel CPU >= Westmere, AMD CPU >= Opteron_G4. Added CPU flags compared to x86-64-v2: +aes.
x86-64-v2-AES:兼容 Intel CPU >= Westmere,AMD CPU >= Opteron_G4。相比 x86-64-v2 新增的 CPU 标志:+aes。 -
x86-64-v3: Compatible with Intel CPU >= Broadwell, AMD CPU >= EPYC. Added CPU flags compared to x86-64-v2-AES: +avx, +avx2, +bmi1, +bmi2, +f16c, +fma, +movbe, +xsave.
x86-64-v3:兼容 Intel CPU >= Broadwell,AMD CPU >= EPYC。相比 x86-64-v2-AES 新增 CPU 标志:+avx,+avx2,+bmi1,+bmi2,+f16c,+fma,+movbe,+xsave。 -
x86-64-v4: Compatible with Intel CPU >= Skylake, AMD CPU >= EPYC v4 Genoa. Added CPU flags compared to x86-64-v3: +avx512f, +avx512bw, +avx512cd, +avx512dq, +avx512vl.
x86-64-v4:兼容 Intel CPU >= Skylake,AMD CPU >= EPYC v4 Genoa。相比 x86-64-v3 新增 CPU 标志:+avx512f,+avx512bw,+avx512cd,+avx512dq,+avx512vl。
Custom CPU Types 自定义 CPU 类型
You can specify custom CPU types with a configurable set of features. These are
maintained in the configuration file /etc/pve/virtual-guest/cpu-models.conf by
an administrator. See man cpu-models.conf for format details.
您可以指定具有可配置特性的自定义 CPU 类型。这些由管理员维护在配置文件 /etc/pve/virtual-guest/cpu-models.conf 中。有关格式详情,请参见 man cpu-models.conf。
Specified custom types can be selected by any user with the Sys.Audit
privilege on /nodes. When configuring a custom CPU type for a VM via the CLI
or API, the name needs to be prefixed with custom-.
具有 /nodes 上 Sys.Audit 权限的任何用户都可以选择指定的自定义类型。在通过 CLI 或 API 为虚拟机配置自定义 CPU 类型时,名称需要以 custom- 为前缀。
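The following is only a rough sketch of what an entry in /etc/pve/virtual-guest/cpu-models.conf could look like; the model name is hypothetical, and man cpu-models.conf remains the authoritative reference for the format and supported properties:
cpu-model: myshinycpu
    flags +aes;+avx
    reported-model kvm64
Such a model would then be referenced with the custom- prefix, for example via qm set <vmid> --cpu custom-myshinycpu.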
Meltdown / Spectre related CPU flags
Meltdown / Spectre 相关的 CPU 标志
There are several CPU flags related to the Meltdown and Spectre vulnerabilities
[37] which need to be set
manually unless the selected CPU type of your VM already enables them by default.
有几个与 Meltdown 和 Spectre 漏洞相关的 CPU 标志 [37] 需要手动设置,除非您虚拟机选择的 CPU 类型默认已启用它们。
There are two requirements that need to be fulfilled in order to use these
CPU flags:
要使用这些 CPU 标志,需要满足两个条件:
-
The host CPU(s) must support the feature and propagate it to the guest’s virtual CPU(s)
主机 CPU 必须支持该功能并将其传递给客户机的虚拟 CPU。 -
The guest operating system must be updated to a version which mitigates the attacks and is able to utilize the CPU feature
客户机操作系统必须更新到能够缓解攻击并利用该 CPU 功能的版本。
Otherwise you need to set the desired CPU flag of the virtual CPU, either by
editing the CPU options in the web UI, or by setting the flags property of the
cpu option in the VM configuration file.
否则,您需要设置虚拟 CPU 的所需 CPU 标志,可以通过编辑 Web UI 中的 CPU 选项,或在虚拟机配置文件中设置 cpu 选项的 flags 属性来实现。
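On the CLI this corresponds to the flags property of the cpu option, for example (the flags shown are only an illustration; choose the ones relevant for your CPUs and guests):
qm set <vmid> --cpu 'kvm64,flags=+spec-ctrl;+pcid'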
For Spectre v1,v2,v4 fixes, your CPU or system vendor also needs to provide a
so-called “microcode update” for your CPU, see
chapter Firmware Updates. Note that not all
affected CPUs can be updated to support spec-ctrl.
对于 Spectre v1、v2、v4 的修复,您的 CPU 或系统供应商还需要为您的 CPU 提供所谓的“微代码更新”,详见固件更新章节。请注意,并非所有受影响的 CPU 都能通过更新支持 spec-ctrl。
To check if the Proxmox VE host is vulnerable, execute the following command as root:
要检查 Proxmox VE 主机是否存在漏洞,请以 root 身份执行以下命令:
for f in /sys/devices/system/cpu/vulnerabilities/*; do echo "${f##*/} -" $(cat "$f"); done
A community script is also available to detect if the host is still vulnerable.
[38]
社区还提供了一个脚本,用于检测主机是否仍然存在漏洞。[ 38]
Intel processors Intel 处理器
-
pcid
This reduces the performance impact of the Meltdown (CVE-2017-5754) mitigation called Kernel Page-Table Isolation (KPTI), which effectively hides the Kernel memory from the user space. Without PCID, KPTI is quite an expensive mechanism [39].
这减少了针对 Meltdown(CVE-2017-5754)漏洞的缓解措施——内核页表隔离(KPTI)——对性能的影响,该措施有效地将内核内存从用户空间隐藏起来。没有 PCID,KPTI 是一种相当昂贵的机制[39]。To check if the Proxmox VE host supports PCID, execute the following command as root:
要检查 Proxmox VE 主机是否支持 PCID,请以 root 身份执行以下命令:# grep ' pcid ' /proc/cpuinfo
If this does not return empty, your host’s CPU has support for pcid.
如果该命令的输出不为空,则说明主机的 CPU 支持 PCID。 -
spec-ctrl
Required to enable the Spectre v1 (CVE-2017-5753) and Spectre v2 (CVE-2017-5715) fix, in cases where retpolines are not sufficient. Included by default in Intel CPU models with -IBRS suffix. Must be explicitly turned on for Intel CPU models without -IBRS suffix. Requires an updated host CPU microcode (intel-microcode >= 20180425).
在 retpolines 不足以防护的情况下,需启用 Spectre v1(CVE-2017-5753)和 Spectre v2(CVE-2017-5715)修复。默认包含于带有-IBRS 后缀的 Intel CPU 型号中。对于不带-IBRS 后缀的 Intel CPU 型号,必须显式开启。需要更新的主机 CPU 微代码(intel-microcode >= 20180425)。 -
ssbd
Required to enable the Spectre V4 (CVE-2018-3639) fix. Not included by default in any Intel CPU model. Must be explicitly turned on for all Intel CPU models. Requires an updated host CPU microcode(intel-microcode >= 20180703).
需启用 Spectre V4(CVE-2018-3639)修复。任何 Intel CPU 型号默认均不包含。必须对所有 Intel CPU 型号显式开启。需要更新的主机 CPU 微代码(intel-microcode >= 20180703)。
AMD processors AMD 处理器
-
ibpb
Required to enable the Spectre v1 (CVE-2017-5753) and Spectre v2 (CVE-2017-5715) fix, in cases where retpolines are not sufficient. Included by default in AMD CPU models with -IBPB suffix. Must be explicitly turned on for AMD CPU models without -IBPB suffix. Requires the host CPU microcode to support this feature before it can be used for guest CPUs.
在 retpolines 不足以解决问题的情况下,启用 Spectre v1(CVE-2017-5753)和 Spectre v2(CVE-2017-5715)修复所必需。默认包含在带有-IBPB 后缀的 AMD CPU 型号中。对于没有-IBPB 后缀的 AMD CPU 型号,必须显式开启。使用此功能之前,主机 CPU 微代码必须支持该功能,才能用于客户机 CPU。 -
virt-ssbd
Required to enable the Spectre v4 (CVE-2018-3639) fix. Not included by default in any AMD CPU model. Must be explicitly turned on for all AMD CPU models. This should be provided to guests, even if amd-ssbd is also provided, for maximum guest compatibility. Note that this must be explicitly enabled when using the "host" cpu model, because this is a virtual feature which does not exist in the physical CPUs.
启用 Spectre v4(CVE-2018-3639)修复所必需。任何 AMD CPU 型号默认均不包含此功能。必须对所有 AMD CPU 型号显式开启。即使同时提供 amd-ssbd,也应向客户机提供此功能,以实现最大兼容性。请注意,使用“host” CPU 模型时必须显式启用此功能,因为这是一个虚拟特性,物理 CPU 中不存在。 -
amd-ssbd
Required to enable the Spectre v4 (CVE-2018-3639) fix. Not included by default in any AMD CPU model. Must be explicitly turned on for all AMD CPU models. This provides higher performance than virt-ssbd, therefore a host supporting this should always expose this to guests if possible. virt-ssbd should none the less also be exposed for maximum guest compatibility as some kernels only know about virt-ssbd.
需要启用 Spectre v4(CVE-2018-3639)修复。默认情况下,任何 AMD CPU 型号均未包含此功能。必须对所有 AMD CPU 型号显式开启。该功能比 virt-ssbd 性能更高,因此支持此功能的主机应尽可能向虚拟机暴露此功能。尽管如此,为了最大程度的虚拟机兼容性,仍应暴露 virt-ssbd,因为某些内核仅识别 virt-ssbd。 -
amd-no-ssb
Recommended to indicate the host is not vulnerable to Spectre V4 (CVE-2018-3639). Not included by default in any AMD CPU model. Future hardware generations of CPU will not be vulnerable to CVE-2018-3639, and thus the guest should be told not to enable its mitigations, by exposing amd-no-ssb. This is mutually exclusive with virt-ssbd and amd-ssbd.
建议用于表明主机不易受 Spectre V4(CVE-2018-3639)影响。默认情况下,任何 AMD CPU 型号均未包含此功能。未来的 CPU 硬件代际将不再易受 CVE-2018-3639 影响,因此应通过暴露 amd-no-ssb 告知虚拟机无需启用其缓解措施。此选项与 virt-ssbd 和 amd-ssbd 互斥。
NUMA
You can also optionally emulate a NUMA
[40] architecture
in your VMs. The basics of the NUMA architecture mean that instead of having a
global memory pool available to all your cores, the memory is spread into local
banks close to each socket.
This can bring speed improvements as the memory bus is not a bottleneck
anymore. If your system has a NUMA architecture [41] we recommend to activate the option, as this
will allow proper distribution of the VM resources on the host system.
This option is also required to hot-plug cores or RAM in a VM.
您还可以选择在虚拟机中模拟 NUMA[40]架构。NUMA 架构的基本原理是,内存不是作为一个全局内存池供所有核心使用,而是分布在靠近每个插槽的本地内存银行中。这可以带来速度提升,因为内存总线不再是瓶颈。如果您的系统具有 NUMA 架构[41],我们建议启用此选项,因为这将允许在主机系统上合理分配虚拟机资源。此选项也是在虚拟机中热插拔核心或内存所必需的。
If the NUMA option is used, it is recommended to set the number of sockets to
the number of nodes of the host system.
如果使用 NUMA 选项,建议将插槽数量设置为主机系统的节点数量。
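A sketch for a host with two NUMA nodes would therefore look like this on the CLI (the core count is only an example):
qm set <vmid> --numa 1 --sockets 2 --cores 4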
vCPU hot-plug vCPU 热插拔
Modern operating systems introduced the capability to hot-plug and, to a
certain extent, hot-unplug CPUs in a running system. Virtualization allows us
to avoid a lot of the (physical) problems real hardware can cause in such
scenarios.
Still, this is a rather new and complicated feature, so its use should be
restricted to cases where it’s absolutely needed. Most of the functionality can
be replicated with other, well tested and less complicated, features, see
Resource Limits.
现代操作系统引入了在运行系统中热插拔 CPU 的能力,在一定程度上也支持热拔 CPU。虚拟化技术使我们能够避免真实硬件在此类场景中可能引发的许多(物理)问题。不过,这仍然是一个相当新颖且复杂的功能,因此其使用应限制在绝对必要的情况下。大部分功能可以通过其他经过充分测试且更简单的功能来实现,详见资源限制。
In Proxmox VE the maximal number of plugged CPUs is always cores * sockets.
To start a VM with less than this total core count of CPUs you may use the
vcpus setting, it denotes how many vCPUs should be plugged in at VM start.
在 Proxmox VE 中,最大可插入的 CPU 数量始终为核心数 * 插槽数。要启动一个 CPU 数量少于该总核心数的虚拟机,可以使用 vcpus 设置,该设置表示在虚拟机启动时应插入多少个虚拟 CPU。
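For example, to start a VM with 4 configured cores but only 2 of them plugged in, and to hot-plug a third one later, you could use commands like the following (a sketch; the VM ID is a placeholder):
qm set <vmid> --cores 4 --vcpus 2
qm set <vmid> --vcpus 3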
Currently, this feature is only supported on Linux; a kernel newer than 3.10
is needed, and a kernel newer than 4.7 is recommended.
目前该功能仅支持 Linux 系统,内核版本需高于 3.10,推荐使用高于 4.7 的内核版本。
You can use a udev rule as follow to automatically set new CPUs as online in
the guest:
您可以使用如下 udev 规则,在客户机中自动将新 CPU 设置为在线状态:
SUBSYSTEM=="cpu", ACTION=="add", TEST=="online", ATTR{online}=="0", ATTR{online}="1"
Save this under /etc/udev/rules.d/ as a file ending in .rules.
将此保存到 /etc/udev/rules.d/ 目录下,文件名以 .rules 结尾。
Note: CPU hot-remove is machine dependent and requires guest cooperation. The
deletion command does not guarantee CPU removal to actually happen, typically
it’s a request forwarded to guest OS using target dependent mechanism, such as
ACPI on x86/amd64.
注意:CPU 热拔除依赖于机器,并且需要客户机配合。删除命令并不保证 CPU 实际被移除,通常它是通过目标依赖机制(如 x86/amd64 上的 ACPI)转发给客户操作系统的请求。
10.2.6. Memory 10.2.6. 内存
For each VM you have the option to set a fixed size memory or asking
Proxmox VE to dynamically allocate memory based on the current RAM usage of the
host.
对于每个虚拟机,您可以选择设置固定大小的内存,或者让 Proxmox VE 根据主机当前的内存使用情况动态分配内存。
When setting memory and minimum memory to the same amount
Proxmox VE will simply allocate what you specify to your VM.
当将内存和最小内存设置为相同数值时,Proxmox VE 会直接分配您指定的内存给虚拟机。
Even when using a fixed memory size, the ballooning device gets added to the
VM, because it delivers useful information such as how much memory the guest
really uses.
In general, you should leave ballooning enabled, but if you want to disable
it (like for debugging purposes), simply uncheck Ballooning Device or set
即使使用固定内存大小,气球设备仍会被添加到虚拟机中,因为它提供了诸如客户机实际使用了多少内存等有用信息。通常,您应保持气球功能启用,但如果您想禁用它(例如用于调试目的),只需取消选中气球设备或在配置中设置
balloon: 0
in the configuration. 即可。
Automatic Memory Allocation 自动内存分配
When setting the minimum memory lower than memory, Proxmox VE will make sure that
the minimum amount you specified is always available to the VM, and if RAM
usage on the host is below a certain target percentage, will dynamically add
memory to the guest up to the maximum memory specified. The target percentage
defaults to 80% and can be configured in the node options.
当设置的最小内存低于内存时,Proxmox VE 会确保您指定的最小内存始终可用于虚拟机,如果主机上的内存使用率低于某个目标百分比,则会动态向客户机添加内存,直到达到指定的最大内存。目标百分比默认为 80%,可以在节点选项中进行配置。
When the host is running low on RAM, the VM will then release some memory
back to the host, swapping running processes if needed and starting the oom
killer in last resort. The passing around of memory between host and guest is
done via a special balloon kernel driver running inside the guest, which will
grab or release memory pages from the host.
[42]
当主机内存不足时,虚拟机将释放部分内存回主机,必要时交换运行中的进程,并在最后手段启动 oom 杀手。主机和客户机之间的内存传递是通过客户机内运行的特殊气球内核驱动完成的,该驱动会从主机获取或释放内存页。[ 42]
When multiple VMs use the autoallocate facility, it is possible to set a
Shares coefficient which indicates the relative amount of the free host memory
that each VM should take. Suppose for instance you have four VMs, three of them
running an HTTP server and the last one is a database server.
The host is configured to target 80% RAM usage. To cache more
database blocks in the database server RAM, you would like to prioritize the
database VM when spare RAM is available. For this you assign a Shares property
of 3000 to the database VM, leaving the other VMs to the Shares default setting
of 1000. The host server has 32GB of RAM, and is currently using 16GB, leaving 32
* 80/100 - 16 = 9.6GB RAM to be allocated to the VMs on top of their configured
minimum memory amount. The database VM will benefit from 9.6 * 3000 / (3000
+ 1000 + 1000 + 1000) = 4.8 GB extra RAM and each HTTP server from 1.6 GB.
当多个虚拟机使用自动分配功能时,可以设置一个 Shares 系数,用以表示每个虚拟机应占用的主机空闲内存的相对比例。假设你有四台虚拟机,其中三台运行 HTTP 服务器,最后一台是数据库服务器。主机配置为目标使用 80% 的内存。为了在数据库服务器的内存中缓存更多的数据库块,你希望在有空闲内存时优先分配给数据库虚拟机。为此,你给数据库虚拟机分配了 Shares 属性为 3000,其他虚拟机则保持默认的 Shares 设置为 1000。主机服务器有 32GB 内存,目前使用了 16GB,剩余可分配给虚拟机的内存为 32 * 80/100 - 16 = 9.6GB,且是在它们配置的最小内存基础上额外分配的。数据库虚拟机将获得 9.6 * 3000 / (3000 + 1000 + 1000 + 1000) = 4.8 GB 的额外内存,每个 HTTP 服务器则获得 1.6 GB。
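As a sketch of how the scenario above could be configured on the CLI (the VMIDs and memory sizes are placeholders, not from the original text), the database VM gets a higher Shares value while the HTTP server VMs keep the default of 1000; note that shares only has an effect while automatic allocation is active, that is, while minimum memory is lower than memory:
# qm set 101 -memory 16384 -balloon 4096 -shares 3000
# qm set 102 -memory 8192 -balloon 2048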
All Linux distributions released after 2010 have the balloon kernel driver
included. For Windows OSes, the balloon driver needs to be added manually and can
incur a slowdown of the guest, so we don’t recommend using it on critical
systems.
所有 2010 年以后发布的 Linux 发行版都包含了气球内核驱动。对于 Windows 操作系统,气球驱动需要手动添加,并且可能会导致客户机性能下降,因此我们不建议在关键系统上使用它。
When allocating RAM to your VMs, a good rule of thumb is always to leave 1GB
of RAM available to the host.
为虚拟机分配内存时,一个好的经验法则是始终为主机保留 1GB 的可用内存。
10.2.7. Memory Encryption
10.2.7. 内存加密
AMD SEV
SEV (Secure Encrypted Virtualization) enables memory encryption per VM using
AES-128 encryption and the AMD Secure Processor.
SEV(安全加密虚拟化)使用 AES-128 加密和 AMD 安全处理器,实现每个虚拟机的内存加密。
SEV-ES (Secure Encrypted Virtualization - Encrypted State) in addition encrypts
all CPU register contents, to prevent leakage of information to the hypervisor.
SEV-ES(安全加密虚拟化 - 加密状态)此外还加密所有 CPU 寄存器内容,以防止信息泄露给管理程序。
SEV-SNP (Secure Encrypted Virtualization - Secure Nested Paging) also attempts
to prevent software-based integrity attacks. See the
AMD SEV SNP white paper for more information.
SEV-SNP(安全加密虚拟化 - 安全嵌套分页)还试图防止基于软件的完整性攻击。更多信息请参见 AMD SEV SNP 白皮书。
Host Requirements: 主机要求:
-
AMD EPYC CPU
-
SEV-ES is only supported on AMD EPYC 7002 series and newer EPYC CPUs
SEV-ES 仅支持 AMD EPYC 7002 系列及更新的 EPYC 处理器 -
SEV-SNP is only supported on AMD EPYC 7003 series and newer EPYC CPUs
SEV-SNP 仅支持 AMD EPYC 7003 系列及更新的 EPYC 处理器 -
SEV-SNP requires host kernel version 6.11 or higher.
SEV-SNP 需要主机内核版本 6.11 或更高。 -
configure AMD memory encryption in the BIOS settings of the host machine
在主机的 BIOS 设置中配置 AMD 内存加密 -
add "kvm_amd.sev=1" to kernel parameters if not enabled by default
如果默认未启用,请将 "kvm_amd.sev=1" 添加到内核参数中 -
add "mem_encrypt=on" to kernel parameters if you want to encrypt memory on the host (SME) see https://www.kernel.org/doc/Documentation/x86/amd-memory-encryption.txt
如果您想在主机上加密内存(SME),请将 "mem_encrypt=on" 添加到内核参数中,详见 https://www.kernel.org/doc/Documentation/x86/amd-memory-encryption.txt -
possibly increase the SWIOTLB size; see https://github.com/AMDESE/AMDSEV#faq-4
可能需要增加 SWIOTLB,详见 https://github.com/AMDESE/AMDSEV#faq-4
To check if SEV is enabled on the host search for sev in dmesg and print out
the SEV kernel parameter of kvm_amd:
要检查主机上是否启用了 SEV,请在 dmesg 中搜索 sev 并打印 kvm_amd 的 SEV 内核参数:
# dmesg | grep -i sev
[...] ccp 0000:45:00.1: sev enabled
[...] ccp 0000:45:00.1: SEV API: <buildversion>
[...] SEV supported: <number> ASIDs
[...] SEV-ES supported: <number> ASIDs
# cat /sys/module/kvm_amd/parameters/sev
Y
Guest Requirements: 客户机要求:
-
edk2-OVMF
-
advisable to use Q35
建议使用 Q35 -
The guest operating system must contain SEV-support.
客户机操作系统必须包含 SEV 支持。
Limitations: 限制:
-
Because the memory is encrypted, the memory usage reported on the host is always wrong.
由于内存是加密的,主机上的内存使用情况总是不准确的。 -
Operations that involve saving or restoring memory like snapshots & live migration do not work yet or are attackable.
涉及保存或恢复内存的操作,如快照和实时迁移,目前尚不可用或存在安全风险。 -
PCI passthrough is not supported.
不支持 PCI 直通。 -
SEV-ES & SEV-SNP are very experimental.
SEV-ES 和 SEV-SNP 都非常实验性。 -
EFI disks are not supported with SEV-SNP.
SEV-SNP 不支持 EFI 磁盘。 -
With SEV-SNP, the reboot command inside a VM simply shuts down the VM.
使用 SEV-SNP 时,虚拟机内的重启命令只是关闭虚拟机。
Example Configuration (SEV):
示例配置(SEV):
# qm set <vmid> -amd-sev type=std,no-debug=1,no-key-sharing=1,kernel-hashes=1
The type defines the encryption technology (the "type=" prefix is optional).
Available options are std, es and snp.
类型定义了加密技术("type="不是必需的)。可用选项有 std、es 和 snp。
The QEMU policy parameter gets calculated with the no-debug and
no-key-sharing parameters. These parameters correspond to policy-bit 0 and 1.
If type is es the policy-bit 2 is set to 1 so that SEV-ES is enabled.
Policy-bit 3 (nosend) is always set to 1 to prevent migration-attacks. For more
information on how to calculate the policy see:
AMD SEV API Specification Chapter 3
QEMU 策略参数通过 no-debug 和 no-key-sharing 参数计算得出。这些参数对应策略位 0 和 1。如果类型是 es,则策略位 2 被设置为 1,以启用 SEV-ES。策略位 3(nosend)始终设置为 1,以防止迁移攻击。有关如何计算策略的更多信息,请参见:AMD SEV API 规范第 3 章
The kernel-hashes option is off by default for backward compatibility with
older OVMF images and guests that do not measure the kernel/initrd.
See https://lists.gnu.org/archive/html/qemu-devel/2021-11/msg02598.html
kernel-hashes 选项默认关闭,以兼容较旧的 OVMF 镜像和不测量内核/initrd 的客户机。详见 https://lists.gnu.org/archive/html/qemu-devel/2021-11/msg02598.html
Check if SEV is working in the VM
检查 SEV 是否在虚拟机中正常工作
Method 1 - dmesg: 方法 1 - dmesg:
Output should look like this.
输出应如下所示。
# dmesg | grep -i sev
AMD Memory Encryption Features active: SEV
Method 2 - MSR 0xc0010131 (MSR_AMD64_SEV):
方法 2 - MSR 0xc0010131(MSR_AMD64_SEV):
Output should be 1. 输出应为 1。
# apt install msr-tools
# modprobe msr
# rdmsr -a 0xc0010131
1
Example Configuration (SEV-SNP):
示例配置(SEV-SNP):
# qm set <vmid> -amd-sev type=snp,allow-smt=1,no-debug=1,kernel-hashes=1
The allow-smt policy-bit is set by default. If you disable it by setting
allow-smt to 0, SMT must be disabled on the host in order for the VM to run.
allow-smt 策略位默认设置为启用。如果通过将 allow-smt 设置为 0 来禁用它,则必须在主机上禁用 SMT,虚拟机才能运行。
Check if SEV-SNP is working in the VM
检查虚拟机中 SEV-SNP 是否正常工作
# dmesg | grep -i snp
Memory Encryption Features active: AMD SEV SEV-ES SEV-SNP
SEV: Using SNP CPUID table, 29 entries present.
SEV: SNP guest platform device initialized.
10.2.8. Network Device 10.2.8. 网络设备
Each VM can have many Network interface controllers (NIC), of four different
types:
每个虚拟机可以拥有多张网络接口控制器(NIC),分为四种不同类型:
-
Intel E1000 is the default, and emulates an Intel Gigabit network card.
Intel E1000 是默认类型,模拟一张 Intel 千兆网络卡。 -
the VirtIO paravirtualized NIC should be used if you aim for maximum performance. Like all VirtIO devices, the guest OS should have the proper driver installed.
如果您追求最大性能,应使用 VirtIO 半虚拟化网卡。像所有 VirtIO 设备一样,客户操作系统应安装相应的驱动程序。 -
the Realtek 8139 emulates an older 100 Mbit/s network card, and should only be used when emulating older operating systems (released before 2002)
Realtek 8139 模拟的是一款较旧的 100 Mbit/s 网卡,仅应在模拟 2002 年之前发布的旧操作系统时使用。
the vmxnet3 is another paravirtualized device, which should only be used when importing a VM from another hypervisor.
vmxnet3 是另一种半虚拟化设备,仅在从其他虚拟机管理程序导入虚拟机时使用。
Proxmox VE will generate for each NIC a random MAC address, so that your VM is
addressable on Ethernet networks.
Proxmox VE 会为每个网卡生成一个随机 MAC 地址,以便您的虚拟机在以太网网络上可寻址。
The NIC you added to the VM can follow one of two different models:
您添加到虚拟机的网卡可以遵循以下两种不同的模式之一:
-
in the default Bridged mode each virtual NIC is backed on the host by a tap device, ( a software loopback device simulating an Ethernet NIC ). This tap device is added to a bridge, by default vmbr0 in Proxmox VE. In this mode, VMs have direct access to the Ethernet LAN on which the host is located.
在默认的桥接模式下,每个虚拟网卡在主机上由一个 tap 设备支持(一个模拟以太网网卡的软件回环设备)。该 tap 设备会被添加到一个桥接器中,Proxmox VE 中默认是 vmbr0。在此模式下,虚拟机可以直接访问主机所在的以太网局域网。 -
in the alternative NAT mode, each virtual NIC will only communicate with the QEMU user networking stack, where a built-in router and DHCP server can provide network access. This built-in DHCP will serve addresses in the private 10.0.2.0/24 range. The NAT mode is much slower than the bridged mode, and should only be used for testing. This mode is only available via CLI or the API, but not via the web UI.
在另一种 NAT 模式下,每个虚拟网卡只与 QEMU 用户网络堆栈通信,该堆栈内置了路由器和 DHCP 服务器以提供网络访问。该内置 DHCP 服务器会分配 10.0.2.0/24 私有地址范围内的地址。NAT 模式比桥接模式慢得多,仅应在测试时使用。此模式仅可通过命令行界面或 API 使用,网页 UI 不支持。
You can also skip adding a network device when creating a VM by selecting No
network device.
创建虚拟机时,您也可以选择不添加网络设备,选择“无网络设备”。
You can overwrite the MTU setting for each VM network device. The option
mtu=1 represents a special case, in which the MTU value will be inherited
from the underlying bridge.
This option is only available for VirtIO network devices.
您可以覆盖每个虚拟机网络设备的 MTU 设置。选项 mtu=1 表示一种特殊情况,此时 MTU 值将继承自底层桥接。此选项仅适用于 VirtIO 网络设备。
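For instance, a VirtIO NIC that inherits the MTU from its underlying bridge could be configured as follows (the VMID and bridge name are placeholders):
# qm set <vmid> -net0 virtio,bridge=vmbr0,mtu=1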
If you are using the VirtIO driver, you can optionally activate the
Multiqueue option. This option allows the guest OS to process networking
packets using multiple virtual CPUs, providing an increase in the total number
of packets transferred.
如果您使用的是 VirtIO 驱动程序,可以选择激活多队列选项。此选项允许客户操作系统使用多个虚拟 CPU 处理网络数据包,从而增加传输的数据包总数。
When using the VirtIO driver with Proxmox VE, each NIC network queue is passed to the
host kernel, where the queue will be processed by a kernel thread spawned by the
vhost driver. With this option activated, it is possible to pass multiple
network queues to the host kernel for each NIC.
在 Proxmox VE 中使用 VirtIO 驱动时,每个网卡的网络队列都会传递给主机内核,由 vhost 驱动生成的内核线程处理该队列。启用此选项后,可以为每个网卡传递多个网络队列给主机内核。
When using Multiqueue, it is recommended to set it to a value equal to the
number of vCPUs of your guest. Remember that the number of vCPUs is the number
of sockets times the number of cores configured for the VM. You also need to set
the number of multi-purpose channels on each VirtIO NIC in the VM with this
ethtool command:
使用多队列时,建议将其设置为与您的客户机 vCPU 数量相等的值。请记住,vCPU 的数量是虚拟机配置的插槽数乘以核心数。您还需要使用以下 ethtool 命令设置虚拟机中每个 VirtIO 网卡的多用途通道数量:
ethtool -L ens1 combined X
where X is the number of vCPUs of the VM.
其中 X 是虚拟机 vCPU 的数量。
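On the Proxmox VE side, the Multiqueue value is set as the queues property of the NIC. A minimal sketch for a guest with 4 vCPUs (VMID and bridge are placeholders):
# qm set <vmid> -net0 virtio,bridge=vmbr0,queues=4
Inside the guest, the matching ethtool command from above would then be ethtool -L ens1 combined 4.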
To configure a Windows guest for Multiqueue install the
Redhat VirtIO Ethernet
Adapter drivers, then adapt the NIC’s configuration as follows. Open the
device manager, right click the NIC under "Network adapters", and select
"Properties". Then open the "Advanced" tab and select "Receive Side Scaling"
from the list on the left. Make sure it is set to "Enabled". Next, navigate to
"Maximum number of RSS Queues" in the list and set it to the number of vCPUs of
your VM. Once you verified that the settings are correct, click "OK" to confirm
them.
要为 Windows 客户机配置多队列,请安装 Redhat VirtIO 以太网适配器驱动程序,然后按如下方式调整网卡配置。打开设备管理器,右键点击“网络适配器”下的网卡,选择“属性”。然后打开“高级”选项卡,从左侧列表中选择“接收端扩展(Receive Side Scaling)”,确保其设置为“启用”。接下来,找到列表中的“最大 RSS 队列数”,并将其设置为虚拟机的 vCPU 数量。确认设置无误后,点击“确定”以保存。
You should note that setting the Multiqueue parameter to a value greater
than one will increase the CPU load on the host and guest systems as the
traffic increases. We recommend to set this option only when the VM has to
process a great number of incoming connections, such as when the VM is running
as a router, reverse proxy or a busy HTTP server doing long polling.
您应注意,将 Multiqueue 参数设置为大于 1 的值会随着流量增加而增加主机和客户机系统的 CPU 负载。我们建议仅在虚拟机需要处理大量传入连接时设置此选项,例如当虚拟机作为路由器、反向代理或进行长轮询的繁忙 HTTP 服务器运行时。
10.2.9. Display 10.2.9. 显示
QEMU can virtualize a few types of VGA hardware. Some examples are:
QEMU 可以虚拟化几种类型的 VGA 硬件。一些示例包括:
-
std, the default, emulates a card with Bochs VBE extensions.
std,默认,模拟带有 Bochs VBE 扩展的显卡。 -
cirrus, this was once the default, it emulates a very old hardware module with all its problems. This display type should only be used if really necessary [43], for example, if using Windows XP or earlier
cirrus,这曾经是默认设置,它模拟了一个非常老旧的硬件模块及其所有问题。只有在确实必要时才应使用这种显示类型[43],例如使用 Windows XP 或更早版本时。 -
vmware, is a VMWare SVGA-II compatible adapter.
vmware,是一个兼容 VMWare SVGA-II 的适配器。 -
qxl, is the QXL paravirtualized graphics card. Selecting this also enables SPICE (a remote viewer protocol) for the VM.
qxl,是 QXL 半虚拟化显卡。选择此项还会为虚拟机启用 SPICE(一种远程查看协议)。 -
virtio-gl, often named VirGL, is a virtual 3D GPU for use inside VMs that can offload workloads to the host GPU without requiring special (expensive) models and drivers, and without binding the host GPU completely, allowing reuse between multiple guests and/or the host.
VirGL support needs some extra libraries that aren’t installed by default, due to being relatively big and also not available as open source for all GPU models/vendors. For most setups you’ll just need to do: apt install libgl1 libegl1
virtio-gl,通常称为 VirGL,是用于虚拟机内的虚拟 3D GPU,可以将工作负载卸载到主机 GPU,无需特殊(昂贵的)型号和驱动程序,也不会完全绑定主机 GPU,允许在多个客户机和/或主机之间重复使用。VirGL support needs some extra libraries that aren’t installed by default due to being relatively big and also not available as open source for all GPU models/vendors. For most setups you’ll just need to do: apt install libgl1 libegl1
VirGL 支持需要一些额外的库,这些库默认未安装,因为它们相对较大,并且并非所有 GPU 型号/厂商都提供开源版本。对于大多数设置,你只需执行:apt install libgl1 libegl1
You can edit the amount of memory given to the virtual GPU, by setting
the memory option. This can enable higher resolutions inside the VM,
especially with SPICE/QXL.
你可以通过设置 memory 选项来编辑分配给虚拟 GPU 的内存量。这可以在虚拟机内启用更高的分辨率,尤其是在使用 SPICE/QXL 时。
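For example, to give a SPICE/QXL display 32 MiB of video memory (the VMID and size are illustrative):
# qm set <vmid> -vga qxl,memory=32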
As the memory is reserved by display device, selecting Multi-Monitor mode
for SPICE (such as qxl2 for dual monitors) has some implications:
由于内存是由显示设备保留的,选择 SPICE 的多显示器模式(例如用于双显示器的 qxl2)会有一些影响:
-
Windows needs a device for each monitor, so if your ostype is some version of Windows, Proxmox VE gives the VM an extra device per monitor. Each device gets the specified amount of memory.
Windows 需要为每个显示器提供一个设备,因此如果你的操作系统类型是某个版本的 Windows,Proxmox VE 会为虚拟机的每个显示器提供一个额外的设备。每个设备都会获得指定数量的内存。 -
Linux VMs, can always enable more virtual monitors, but selecting a Multi-Monitor mode multiplies the memory given to the device with the number of monitors.
Linux 虚拟机,可以随时启用更多虚拟显示器,但选择多显示器模式会将分配给设备的内存乘以显示器数量。
Selecting serialX as display type disables the VGA output, and redirects
the Web Console to the selected serial port. A configured display memory
setting will be ignored in that case.
选择 serialX 作为显示类型会禁用 VGA 输出,并将 Web 控制台重定向到所选的串口。在这种情况下,配置的显示内存设置将被忽略。
You can enable the VNC clipboard by setting clipboard to vnc.
您可以通过将剪贴板设置为 vnc 来启用 VNC 剪贴板。
# qm set <vmid> -vga <displaytype>,clipboard=vnc
In order to use the clipboard feature, you must first install the
SPICE guest tools. On Debian-based distributions, this can be achieved
by installing spice-vdagent. For other Operating Systems search for it
in the official repositories or see: https://www.spice-space.org/download.html
为了使用剪贴板功能,您必须先安装 SPICE 客户端工具。在基于 Debian 的发行版中,可以通过安装 spice-vdagent 来实现。对于其他操作系统,请在官方仓库中搜索,或参见:https://www.spice-space.org/download.html
Once you have installed the spice guest tools, you can use the VNC clipboard
function (e.g. in the noVNC console panel). However, if you’re using
SPICE, virtio or virgl, you’ll need to choose which clipboard to use.
This is because the default SPICE clipboard will be replaced by the
VNC clipboard, if clipboard is set to vnc.
安装了 spice 客户端工具后,您可以使用 VNC 剪贴板功能(例如在 noVNC 控制台面板中)。但是,如果您使用的是 SPICE、virtio 或 virgl,则需要选择使用哪种剪贴板。这是因为如果剪贴板设置为 vnc,默认的 SPICE 剪贴板将被 VNC 剪贴板替代。
10.2.10. USB Passthrough
10.2.10. USB 直通
There are two different types of USB passthrough devices:
USB 直通设备有两种不同类型:
-
Host USB passthrough 主机 USB 直通
-
SPICE USB passthrough SPICE USB 直通
Host USB passthrough works by giving a VM a USB device of the host.
This can either be done via the vendor- and product-id, or
via the host bus and port.
主机 USB 直通通过将主机的 USB 设备分配给虚拟机来实现。可以通过供应商 ID 和产品 ID,或者通过主机总线和端口来完成。
The vendor/product-id looks like this: 0123:abcd,
where 0123 is the id of the vendor, and abcd is the id
of the product, meaning two pieces of the same usb device
have the same id.
供应商/产品 ID 的格式如下:0123:abcd,其中 0123 是供应商 ID,abcd 是产品 ID,表示两个相同的 USB 设备具有相同的 ID。
The bus/port looks like this: 1-2.3.4, where 1 is the bus
and 2.3.4 is the port path. This represents the physical
ports of your host (depending on the internal order of the
usb controllers).
总线/端口的格式如下:1-2.3.4,其中 1 是总线,2.3.4 是端口路径。这表示主机的物理端口(取决于 USB 控制器的内部顺序)。
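Using the identifiers from the examples above, both variants could be configured like this (the VMID is a placeholder and the device IDs are the illustrative values from the text):
# qm set <vmid> -usb0 host=0123:abcd
# qm set <vmid> -usb1 host=1-2.3.4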
If a device is present in a VM configuration when the VM starts up,
but the device is not present in the host, the VM can boot without problems.
As soon as the device/port is available in the host, it gets passed through.
如果设备在虚拟机配置中存在且虚拟机启动时该设备未在主机上存在,虚拟机仍然可以正常启动。一旦设备/端口在主机上可用,它就会被直通。
|
|
Using this kind of USB passthrough means that you cannot move
a VM online to another host, since the hardware is only available
on the host the VM is currently residing. 使用这种类型的 USB 直通意味着你无法在线迁移虚拟机到另一台主机,因为硬件仅在虚拟机当前所在的主机上可用。 |
The second type of passthrough is SPICE USB passthrough. If you add one or more
SPICE USB ports to your VM, you can dynamically pass a local USB device from
your SPICE client through to the VM. This can be useful to redirect an input
device or hardware dongle temporarily.
第二种直通类型是 SPICE USB 直通。如果你为虚拟机添加一个或多个 SPICE USB 端口,可以动态地将本地 USB 设备从 SPICE 客户端传递到虚拟机。这对于临时重定向输入设备或硬件加密狗非常有用。
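A SPICE USB port can be added in a similar way, for example (the VMID and port index are placeholders):
# qm set <vmid> -usb2 spice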
It is also possible to map devices on a cluster level, so that they can be
properly used with HA and hardware changes are detected and non root users
can configure them. See Resource Mapping
for details on that.
也可以在集群级别映射设备,以便它们能够被高可用性(HA)正确使用,硬件更改能够被检测到,且非 root 用户可以配置它们。有关详细信息,请参见资源映射。
10.2.11. BIOS and UEFI
10.2.11. BIOS 和 UEFI
In order to properly emulate a computer, QEMU needs to use a firmware,
which, on common PCs, is often known as BIOS or (U)EFI. It is executed as one of the
first steps when booting a VM. It is responsible for doing basic hardware
initialization and for providing an interface to the firmware and hardware for
the operating system. By default QEMU uses SeaBIOS for this, which is an
open-source, x86 BIOS implementation. SeaBIOS is a good choice for most
standard setups.
为了正确模拟一台计算机,QEMU 需要使用固件。在常见的 PC 上,这通常被称为 BIOS 或 (U)EFI,是启动虚拟机时执行的第一步之一。它负责进行基本的硬件初始化,并为操作系统提供固件和硬件的接口。默认情况下,QEMU 使用 SeaBIOS,这是一个开源的 x86 BIOS 实现。SeaBIOS 是大多数标准配置的良好选择。
Some operating systems (such as Windows 11) may require use of an UEFI
compatible implementation. In such cases, you must use OVMF instead,
which is an open-source UEFI implementation. [44]
某些操作系统(例如 Windows 11)可能需要使用兼容 UEFI 的实现。在这种情况下,必须改用 OVMF,它是一个开源的 UEFI 实现。[44]
There are other scenarios in which the SeaBIOS may not be the ideal firmware to
boot from, for example if you want to do VGA passthrough. [45]
还有其他情况下,SeaBIOS 可能不是理想的启动固件,例如如果你想使用 VGA 直通。[45]
If you want to use OVMF, there are several things to consider:
如果你想使用 OVMF,需要考虑以下几点:
In order to save things like the boot order, there needs to be an EFI Disk.
This disk will be included in backups and snapshots, and there can only be one.
为了保存启动顺序等信息,需要有一个 EFI 磁盘。该磁盘会包含在备份和快照中,并且只能有一个。
You can create such a disk with the following command:
你可以使用以下命令创建这样的磁盘:
# qm set <vmid> -efidisk0 <storage>:1,format=<format>,efitype=4m,pre-enrolled-keys=1
Where <storage> is the storage where you want to have the disk, and
<format> is a format which the storage supports. Alternatively, you can
create such a disk through the web interface with Add → EFI Disk in the
hardware section of a VM.
其中 <storage> 是你想要存放磁盘的存储位置,<format> 是该存储支持的格式。或者,你也可以通过网页界面,在虚拟机的硬件部分选择添加 → EFI 磁盘来创建这样的磁盘。
The efitype option specifies which version of the OVMF firmware should be
used. For new VMs, this should always be 4m, as it supports Secure Boot and
has more space allocated to support future development (this is the default in
the GUI).
efitype 选项指定应使用哪个版本的 OVMF 固件。对于新虚拟机,这应始终设置为 4m,因为它支持安全启动,并且分配了更多空间以支持未来开发(这是图形界面中的默认设置)。
pre-enrolled-keys specifies if the efidisk should come pre-loaded with
distribution-specific and Microsoft Standard Secure Boot keys. It also enables
Secure Boot by default (though it can still be disabled in the OVMF menu within
the VM).
pre-enroll-keys 指定 efidisk 是否应预先加载特定发行版和 Microsoft 标准安全启动密钥。它还默认启用安全启动(尽管仍可在虚拟机内的 OVMF 菜单中禁用)。
|
|
If you want to start using Secure Boot in an existing VM (that still uses
a 2m efidisk), you need to recreate the efidisk. To do so, delete the old one
(qm set <vmid> -delete efidisk0) and add a new one as described above. This
will reset any custom configurations you have made in the OVMF menu! 如果你想在现有虚拟机(仍使用 2m efidisk)中开始使用安全启动,则需要重新创建 efidisk。为此,删除旧的(qm set <vmid> -delete efidisk0),然后按上述方法添加新的。这将重置你在 OVMF 菜单中所做的任何自定义配置! |
When using OVMF with a virtual display (without VGA passthrough),
you need to set the client resolution in the OVMF menu (which you can reach
with a press of the ESC button during boot), or you have to choose
SPICE as the display type.
使用带有虚拟显示器的 OVMF(无 VGA 直通)时,您需要在 OVMF 菜单中设置客户端分辨率(可在启动时按 ESC 键进入),或者必须选择 SPICE 作为显示类型。
10.2.12. Trusted Platform Module (TPM)
10.2.12. 可信平台模块(TPM)
A Trusted Platform Module is a device which stores secret data - such as
encryption keys - securely and provides tamper-resistance functions for
validating system boot.
可信平台模块是一种设备,用于安全存储秘密数据——例如加密密钥——并提供防篡改功能以验证系统启动。
Certain operating systems (such as Windows 11) require such a device to be
attached to a machine (be it physical or virtual).
某些操作系统(如 Windows 11)要求必须将此类设备连接到机器上(无论是物理机还是虚拟机)。
A TPM is added by specifying a tpmstate volume. This works similar to an
efidisk, in that it cannot be changed (only removed) once created. You can add
one via the following command:
通过指定一个 tpmstate 卷来添加 TPM。这与 efidisk 类似,一旦创建后无法更改(只能删除)。你可以通过以下命令添加:
# qm set <vmid> -tpmstate0 <storage>:1,version=<version>
Where <storage> is the storage you want to put the state on, and <version>
is either v1.2 or v2.0. You can also add one via the web interface, by
choosing Add → TPM State in the hardware section of a VM.
其中 <storage> 是你想存放状态的存储位置,<version> 可以是 v1.2 或 v2.0。你也可以通过网页界面添加,在虚拟机的硬件部分选择 添加 → TPM 状态。
The v2.0 TPM spec is newer and better supported, so unless you have a specific
implementation that requires a v1.2 TPM, it should be preferred.
v2.0 TPM 规范较新且支持更好,除非你有特定实现需要使用 v1.2 TPM,否则应优先选择 v2.0。
|
|
Compared to a physical TPM, an emulated one does not provide any real
security benefits. The point of a TPM is that the data on it cannot be modified
easily, except via commands specified as part of the TPM spec. Since with an
emulated device the data storage happens on a regular volume, it can potentially
be edited by anyone with access to it. 与物理 TPM 相比,模拟 TPM 并不提供任何真正的安全优势。TPM 的意义在于其上的数据不能轻易被修改,除非通过 TPM 规范中指定的命令。由于模拟设备的数据存储在普通卷上,任何有访问权限的人都可能编辑这些数据。 |
10.2.13. Inter-VM shared memory
10.2.13. 虚拟机间共享内存
You can add an Inter-VM shared memory device (ivshmem), which allows one to
share memory between the host and a guest, or also between multiple guests.
您可以添加一个虚拟机间共享内存设备(ivshmem),它允许在主机和虚拟机之间,或者多个虚拟机之间共享内存。
To add such a device, you can use qm:
要添加此类设备,您可以使用 qm 命令:
# qm set <vmid> -ivshmem size=32,name=foo
Where the size is in MiB. The file will be located under
/dev/shm/pve-shm-$name (the default name is the vmid).
其中大小以 MiB 为单位。该文件将位于 /dev/shm/pve-shm-$name 下(默认名称为 vmid)。
|
|
Currently the device will get deleted as soon as any VM using it is shut
down or stopped. Open connections will still persist, but new connections
to the exact same device cannot be made anymore. 当前设备将在任何使用该设备的虚拟机关闭或停止后立即被删除。已打开的连接仍将保持,但无法再对完全相同的设备建立新的连接。 |
A use case for such a device is the Looking Glass
[46] project, which enables high
performance, low-latency display mirroring between host and guest.
此类设备的一个使用案例是 Looking Glass [46]项目,该项目实现了主机与客户机之间的高性能、低延迟显示镜像。
10.2.14. Audio Device 10.2.14. 音频设备
To add an audio device run the following command:
要添加音频设备,请运行以下命令:
qm set <vmid> -audio0 device=<device>
Supported audio devices are:
支持的音频设备有:
-
ich9-intel-hda: Intel HD Audio Controller, emulates ICH9
ich9-intel-hda:Intel HD 音频控制器,模拟 ICH9 -
intel-hda: Intel HD Audio Controller, emulates ICH6
intel-hda:Intel HD 音频控制器,模拟 ICH6 -
AC97: Audio Codec '97, useful for older operating systems like Windows XP
AC97:音频编解码器 '97,适用于像 Windows XP 这样的旧操作系统
There are two backends available:
有两种后端可用:
-
spice
-
none
The spice backend can be used in combination with SPICE while
the none backend can be useful if an audio device is needed in the VM for some
software to work. To use the physical audio device of the host use device
passthrough (see PCI Passthrough and
USB Passthrough). Remote protocols like Microsoft’s RDP
have options to play sound.
spice 后端可以与 SPICE 一起使用,而 none 后端在虚拟机中需要音频设备以使某些软件正常工作时非常有用。要使用主机的物理音频设备,请使用设备直通(参见 PCI 直通和 USB 直通)。像 Microsoft 的 RDP 这样的远程协议具有播放声音的选项。
10.2.15. VirtIO RNG
A RNG (Random Number Generator) is a device providing entropy (randomness) to
a system. A virtual hardware-RNG can be used to provide such entropy from the
host system to a guest VM. This helps to avoid entropy starvation problems in
the guest (a situation where not enough entropy is available and the system may
slow down or run into problems), especially during the guest's boot process.
RNG(随机数生成器)是一种为系统提供熵(随机性)的设备。虚拟硬件 RNG 可以用来将主机系统的熵提供给客户虚拟机。这有助于避免客户机中出现熵匮乏的问题(即熵不足,系统可能变慢或出现故障的情况),尤其是在客户机启动过程中。
To add a VirtIO-based emulated RNG, run the following command:
要添加基于 VirtIO 的模拟 RNG,请运行以下命令:
qm set <vmid> -rng0 source=<source>[,max_bytes=X,period=Y]
source specifies where entropy is read from on the host and has to be one of
the following:
source 指定从主机的哪个位置读取熵,必须是以下之一:
-
/dev/urandom: Non-blocking kernel entropy pool (preferred)
/ dev/urandom:非阻塞内核熵池(首选) -
/dev/random: Blocking kernel pool (not recommended, can lead to entropy starvation on the host system)
/ dev/random:阻塞内核熵池(不推荐,可能导致主机系统熵耗尽) -
/dev/hwrng: To pass through a hardware RNG attached to the host (if multiple are available, the one selected in /sys/devices/virtual/misc/hw_random/rng_current will be used)
/ dev/hwrng:用于直通连接到主机的硬件随机数生成器(如果有多个,将使用 /sys/devices/virtual/misc/hw_random/rng_current 中选择的那个)
A limit can be specified via the max_bytes and period parameters, they are
read as max_bytes per period in milliseconds. However, it does not represent
a linear relationship: 1024B/1000ms would mean that up to 1 KiB of data becomes
available on a 1 second timer, not that 1 KiB is streamed to the guest over the
course of one second. Reducing the period can thus be used to inject entropy
into the guest at a faster rate.
可以通过 max_bytes 和 period 参数指定限制,它们被读取为每个周期(以毫秒为单位)的最大字节数。然而,这并不表示线性关系:1024B/1000ms 意味着在 1 秒定时器上最多可用 1 KiB 数据,而不是在一秒内向客户机流式传输 1 KiB。减少周期可以用来以更快的速率向客户机注入熵。
By default, the limit is set to 1024 bytes per 1000 ms (1 KiB/s). It is
recommended to always use a limiter to avoid guests using too many host
resources. If desired, a value of 0 for max_bytes can be used to disable
all limits.
默认情况下,限制设置为每 1000 毫秒 1024 字节(1 KiB/s)。建议始终使用限速器,以避免客户机使用过多的主机资源。如果需要,可以将 max_bytes 设置为 0 以禁用所有限制。
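Putting this together, a VirtIO RNG fed from /dev/urandom with the default limit could be added like this (the VMID is a placeholder and the values simply restate the defaults mentioned above):
# qm set <vmid> -rng0 source=/dev/urandom,max_bytes=1024,period=1000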
10.2.16. Virtiofs
Virtiofs is a shared filesystem designed for virtual environments. It allows
sharing a directory tree available on the host by mounting it within VMs. It does
not use the network stack and aims to offer similar performance and semantics as
the source filesystem.
Virtiofs 是为虚拟环境设计的共享文件系统。它允许通过在虚拟机内挂载主机上的目录树来共享该目录树。它不使用网络堆栈,旨在提供与源文件系统类似的性能和语义。
To use virtiofs, the virtiofsd daemon
needs to run in the background. This happens automatically in Proxmox VE when
starting a VM using a virtiofs mount.
要使用 virtiofs,需要在后台运行 virtiofsd 守护进程。在 Proxmox VE 中,当使用 virtiofs 挂载启动虚拟机时,这一过程会自动发生。
Linux VMs with kernel >=5.4 support virtiofs by default
(virtiofs kernel module), but some
features require a newer kernel.
内核版本≥5.4 的 Linux 虚拟机默认支持 virtiofs(virtiofs 内核模块),但某些功能需要更新的内核。
To use virtiofs, ensure that virtiofsd is installed on the Proxmox VE host:
要使用 virtiofs,请确保 Proxmox VE 主机上已安装 virtiofsd:
apt install virtiofsd
There is a
guide
available on how to utilize virtiofs in Windows VMs.
有一份关于如何在 Windows 虚拟机中使用 virtiofs 的指南。
Known Limitations 已知限制
-
If virtiofsd crashes, its mount point will hang in the VM until the VM is completely stopped.
如果 virtiofsd 崩溃,其挂载点将在虚拟机中挂起,直到虚拟机完全停止。 -
virtiofsd not responding may result in a hanging mount in the VM, similar to an unreachable NFS.
virtiofsd 无响应可能导致虚拟机中的挂载点挂起,类似于无法访问的 NFS。 -
Memory hotplug does not work in combination with virtiofs (also results in hanging access).
内存热插拔与 virtiofs 结合使用时不起作用(也会导致访问挂起)。 -
Memory related features such as live migration, snapshots, and hibernate are not available with virtiofs devices.
与内存相关的功能,如实时迁移、快照和休眠,在 virtiofs 设备上不可用。 -
Windows cannot understand ACLs in the context of virtiofs. Therefore, do not expose ACLs for Windows VMs, otherwise the virtiofs device will not be visible within the VM.
Windows 无法理解 virtiofs 上下文中的 ACL。因此,不要为 Windows 虚拟机暴露 ACL,否则 virtiofs 设备将在虚拟机内不可见。
Add Mapping for Shared Directories
添加共享目录映射
To add a mapping for a shared directory, you can use the API directly with
pvesh as described in the Resource Mapping section:
要添加共享目录映射,可以按照资源映射部分的描述,直接使用 pvesh 的 API:
pvesh create /cluster/mapping/dir --id dir1 \
--map node=node1,path=/path/to/share1 \
--map node=node2,path=/path/to/share2
Add virtiofs to a VM
向虚拟机添加 virtiofs
To share a directory using virtiofs, add the parameter virtiofs<N> (N can be
anything between 0 and 9) to the VM config and use a directory ID (dirid) that
has been configured in the resource mapping. Additionally, you can set the
cache option to either always, never, metadata, or auto (default:
auto), depending on your requirements. How the different caching modes behave
can be read here under the "Caching Modes"
section.
要使用 virtiofs 共享目录,请在虚拟机配置中添加参数 virtiofs<N>(N 可以是 0 到 9 之间的任意数字),并使用已在资源映射中配置的目录 ID(dirid)。此外,您可以根据需求将缓存选项设置为 always、never、metadata 或 auto(默认值:auto)。不同缓存模式的行为可以在此处的“缓存模式”部分查看。
The virtiofsd supports ACL and xattr passthrough (can be enabled with the
expose-acl and expose-xattr options), allowing the guest to access ACLs and
xattrs if the underlying host filesystem supports them, but they must also be
compatible with the guest filesystem (for example most Linux filesystems support
ACLs, while Windows filesystems do not).
virtiofsd 支持 ACL 和 xattr 透传(可通过 expose-acl 和 expose-xattr 选项启用),允许来宾访问 ACL 和 xattr,前提是底层主机文件系统支持它们,但它们也必须与来宾文件系统兼容(例如,大多数 Linux 文件系统支持 ACL,而 Windows 文件系统不支持)。
The expose-acl option automatically implies expose-xattr, that is, it makes
no difference if you set expose-xattr to 0 if expose-acl is set to 1.
expose-acl 选项自动包含 expose-xattr,也就是说,如果 expose-acl 设置为 1,则即使将 expose-xattr 设置为 0 也没有区别。
If you want virtiofs to honor the O_DIRECT flag, you can set the direct-io
parameter to 1 (default: 0). This will degrade performance, but is useful if
applications do their own caching.
如果您希望 virtiofs 支持 O_DIRECT 标志,可以将 direct-io 参数设置为 1(默认值:0)。这会降低性能,但如果应用程序自行进行缓存,则非常有用。
qm set <vmid> -virtiofs0 dirid=<dirid>,cache=always,direct-io=1
qm set <vmid> -virtiofs1 <dirid>,cache=never,expose-xattr=1
qm set <vmid> -virtiofs2 <dirid>,expose-acl=1
To temporarily mount virtiofs in a guest VM with the Linux kernel virtiofs
driver, run the following command inside the guest:
要在安装了 Linux 内核 virtiofs 驱动的客户机虚拟机中临时挂载 virtiofs,请在客户机内运行以下命令:
mount -t virtiofs <dirid> <mount point>
To have a persistent virtiofs mount, you can create an fstab entry:
要实现持久的 virtiofs 挂载,可以创建一个 fstab 条目:
<dirid> <mount point> virtiofs rw,relatime 0 0
The dirid associated with the path on the current node is also used as the mount
tag (name used to mount the device on the guest).
当前节点上与路径关联的 dirid 也用作挂载标签(用于在客户机上挂载设备的名称)。
For more information on available virtiofsd parameters, see the
GitLab virtiofsd project page.
有关可用 virtiofsd 参数的更多信息,请参阅 GitLab virtiofsd 项目页面。
10.2.17. Device Boot Order
10.2.17. 设备启动顺序
QEMU can tell the guest which devices it should boot from, and in which order.
This can be specified in the config via the boot property, for example:
QEMU 可以告诉客户机应该从哪些设备启动,以及启动顺序。可以通过配置中的 boot 属性来指定,例如:
boot: order=scsi0;net0;hostpci0
This way, the guest would first attempt to boot from the disk scsi0, if that
fails, it would go on to attempt network boot from net0, and in case that
fails too, finally attempt to boot from a passed through PCIe device (seen as
disk in case of NVMe, otherwise tries to launch into an option ROM).
这样,客户机会首先尝试从磁盘 scsi0 启动,如果失败,则尝试从网络设备 net0 启动,如果仍然失败,最后尝试从直通的 PCIe 设备启动(对于 NVMe 设备视为磁盘,否则尝试启动其选项 ROM)。
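The same boot order can be set via the CLI; note that the semicolons need quoting in the shell (the VMID is a placeholder and the device names are those from the example above):
# qm set <vmid> -boot 'order=scsi0;net0;hostpci0'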
On the GUI you can use a drag-and-drop editor to specify the boot order, and use
the checkbox to enable or disable certain devices for booting altogether.
在图形界面中,可以使用拖放编辑器来指定启动顺序,并通过复选框启用或禁用某些设备的启动。
|
|
If your guest uses multiple disks to boot the OS or load the bootloader,
all of them must be marked as bootable (that is, they must have the checkbox
enabled or appear in the list in the config) for the guest to be able to boot.
This is because recent SeaBIOS and OVMF versions only initialize disks if they
are marked bootable. 如果您的客户机使用多个磁盘来启动操作系统或加载引导加载程序,则所有这些磁盘都必须被标记为可引导(即必须勾选复选框或在配置列表中出现),客户机才能启动。这是因为较新的 SeaBIOS 和 OVMF 版本仅初始化被标记为可引导的磁盘。 |
In any case, even devices not appearing in the list or having the checkmark
disabled will still be available to the guest, once its operating system has
booted and initialized them. The bootable flag only affects the guest BIOS and
bootloader.
无论如何,即使设备未出现在列表中或未勾选复选框,一旦客户机的操作系统启动并初始化它们,这些设备仍然可供客户机使用。可引导标志仅影响客户机的 BIOS 和引导加载程序。
10.2.18. Automatic Start and Shutdown of Virtual Machines
10.2.18. 虚拟机的自动启动和关闭
After creating your VMs, you probably want them to start automatically
when the host system boots. For this you need to select the option Start at
boot from the Options Tab of your VM in the web interface, or set it with
the following command:
创建虚拟机后,您可能希望它们在主机系统启动时自动启动。为此,您需要在 Web 界面的虚拟机“选项”标签中选择“开机启动”选项,或者使用以下命令进行设置:
# qm set <vmid> -onboot 1
In some cases you want to be able to fine-tune the boot order of your
VMs, for instance if one of your VMs is providing firewalling or DHCP
to other guest systems. For this you can use the following
parameters:
在某些情况下,您可能希望能够微调虚拟机的启动顺序,例如当您的某个虚拟机为其他来宾系统提供防火墙或 DHCP 服务时。为此,您可以使用以下参数:
-
Start/Shutdown order: Defines the start order priority. For example, set it to 1 if you want the VM to be the first to be started. (We use the reverse startup order for shutdown, so a machine with a start order of 1 would be the last to be shut down). If multiple VMs have the same order defined on a host, they will additionally be ordered by VMID in ascending order.
启动/关闭顺序:定义启动顺序的优先级。例如,如果您希望某个虚拟机最先启动,可以将其设置为 1。(关闭时采用相反的启动顺序,因此启动顺序为 1 的虚拟机将是最后关闭的)。如果多个虚拟机在同一主机上定义了相同的顺序,则它们将按 VMID 升序排列。 -
Startup delay: Defines the interval between this VM start and subsequent VMs starts. For example, set it to 240 if you want to wait 240 seconds before starting other VMs.
启动延迟:定义此虚拟机启动与后续虚拟机启动之间的间隔。例如,如果您希望在启动其他虚拟机之前等待 240 秒,可以将其设置为 240。 -
Shutdown timeout: Defines the duration in seconds Proxmox VE should wait for the VM to be offline after issuing a shutdown command. By default this value is set to 180, which means that Proxmox VE will issue a shutdown request and wait 180 seconds for the machine to be offline. If the machine is still online after the timeout it will be stopped forcefully.
关机超时:定义 Proxmox VE 在发出关机命令后等待虚拟机离线的时间,单位为秒。默认值为 180,这意味着 Proxmox VE 会发出关机请求并等待 180 秒让机器离线。如果超时后机器仍然在线,将被强制停止。
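For example, a firewall VM that should start first, be followed by the next VM only after 240 seconds, and get 180 seconds to shut down could be configured as follows (the VMID and values are illustrative):
# qm set <vmid> -onboot 1 -startup order=1,up=240,down=180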
|
|
VMs managed by the HA stack do not follow the start on boot and
boot order options currently. Those VMs will be skipped by the startup and
shutdown algorithm as the HA manager itself ensures that VMs get started and
stopped. 由 HA 堆栈管理的虚拟机当前不遵循开机启动和启动顺序选项。这些虚拟机将被启动和关机算法跳过,因为 HA 管理器本身确保虚拟机的启动和停止。 |
Please note that machines without a Start/Shutdown order parameter will always
start after those where the parameter is set. Further, this parameter can only
be enforced between virtual machines running on the same host, not
cluster-wide.
请注意,没有设置启动/关机顺序参数的机器总是在设置了该参数的机器之后启动。此外,该参数只能在运行于同一主机上的虚拟机之间强制执行,不能跨集群范围应用。
If you require a delay between the host boot and the booting of the first VM,
see the section on Proxmox VE Node Management.
如果您需要在主机启动和第一台虚拟机启动之间设置延迟,请参阅 Proxmox VE 节点管理部分。
10.2.19. QEMU Guest Agent
10.2.19. QEMU 客户机代理
The QEMU Guest Agent is a service which runs inside the VM, providing a
communication channel between the host and the guest. It is used to exchange
information and allows the host to issue commands to the guest.
QEMU 客户机代理是在虚拟机内部运行的服务,提供主机与客户机之间的通信通道。它用于交换信息,并允许主机向客户机发出命令。
For example, the IP addresses in the VM summary panel are fetched via the guest
agent.
例如,虚拟机摘要面板中的 IP 地址就是通过客户机代理获取的。
Or when starting a backup, the guest is told via the guest agent to sync
outstanding writes via the fs-freeze and fs-thaw commands.
或者在启动备份时,客户机会通过客户机代理接收到使用 fs-freeze 和 fs-thaw 命令同步未完成写入的指令。
For the guest agent to work properly the following steps must be taken:
为了使客户机代理正常工作,必须采取以下步骤:
-
install the agent in the guest and make sure it is running
在客户机中安装代理并确保其正在运行 -
enable the communication via the agent in Proxmox VE
在 Proxmox VE 中启用通过代理的通信
Install Guest Agent 安装客户机代理
For most Linux distributions, the guest agent is available. The package is
usually named qemu-guest-agent.
对于大多数 Linux 发行版,来宾代理是可用的。该包通常命名为 qemu-guest-agent。
For Windows, it can be installed from the
Fedora
VirtIO driver ISO.
对于 Windows,可以从 Fedora VirtIO 驱动程序 ISO 中安装。
Enable Guest Agent Communication
启用来宾代理通信
Communication from Proxmox VE with the guest agent can be enabled in the VM’s
Options panel. A fresh start of the VM is necessary for the changes to take
effect.
可以在虚拟机的“选项”面板中启用 Proxmox VE 与来宾代理的通信。需要重新启动虚拟机以使更改生效。
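On the CLI, this corresponds to the agent option, for example (the VMID is a placeholder):
# qm set <vmid> -agent enabled=1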
Automatic TRIM Using QGA
使用 QGA 自动 TRIM
It is possible to enable the Run guest-trim option. With this enabled,
Proxmox VE will issue a trim command to the guest after the following
operations that have the potential to write out zeros to the storage:
可以启用“运行 guest-trim”选项。启用后,Proxmox VE 会在以下可能向存储写入零的操作之后向客户机发出 trim 命令:
-
moving a disk to another storage
将磁盘移动到另一个存储 -
live migrating a VM to another node with local storage
将虚拟机实时迁移到具有本地存储的另一个节点
On a thin provisioned storage, this can help to free up unused space.
在精简配置的存储上,这有助于释放未使用的空间。
|
|
There is a caveat with ext4 on Linux, because it uses an in-memory
optimization to avoid issuing duplicate TRIM requests. Since the guest doesn’t
know about the change in the underlying storage, only the first guest-trim will
run as expected. Subsequent ones, until the next reboot, will only consider
parts of the filesystem that changed since then. Linux 上的 ext4 有一个注意事项,因为它使用内存中的优化来避免发出重复的 TRIM 请求。由于客户机不知道底层存储的变化,只有第一次客户机 TRIM 会按预期运行。随后的 TRIM 操作,直到下一次重启,只会考虑自那时以来文件系统中发生变化的部分。 |
Filesystem Freeze & Thaw on Backup
备份时的文件系统冻结与解冻
By default, guest filesystems are synced via the fs-freeze QEMU Guest Agent
Command when a backup is performed, to provide consistency.
默认情况下,备份执行时通过 fs-freeze QEMU 客户机代理命令同步客户机文件系统,以确保一致性。
On Windows guests, some applications might handle consistent backups themselves
by hooking into the Windows VSS (Volume Shadow Copy Service) layer, a
fs-freeze then might interfere with that. For example, it has been observed
that calling fs-freeze with some SQL Servers triggers VSS to call the SQL
Writer VSS module in a mode that breaks the SQL Server backup chain for
differential backups.
在 Windows 客户机上,一些应用程序可能通过挂钩 Windows VSS(卷影复制服务)层来自行处理一致性备份,而 fs-freeze 可能会干扰这一过程。例如,有观察表明,对某些 SQL Server 调用 fs-freeze 会触发 VSS 以一种破坏 SQL Server 差异备份备份链的模式调用 SQL Writer VSS 模块。
There are two options on how to handle such a situation.
有两种方法可以处理这种情况。
-
Configure the QEMU Guest Agent to use a different VSS variant that does not interfere with other VSS users. The Proxmox VE wiki has more details.
配置 QEMU 客户端代理使用不同的 VSS 变体,该变体不会干扰其他 VSS 用户。Proxmox VE 维基有更多详细信息。 -
Alternatively, you can configure Proxmox VE to not issue a freeze-and-thaw cycle on backup by setting the freeze-fs-on-backup QGA option to 0. This can also be done via the GUI with the Freeze/thaw guest filesystems on backup for consistency option.
或者,您可以通过将 freeze-fs-on-backup QGA 选项设置为 0,配置 Proxmox VE 在备份时不发出冻结和解冻周期。这也可以通过 GUI 中的“备份时冻结/解冻客户机文件系统以保持一致性”选项来完成。Disabling this option can potentially lead to backups with inconsistent filesystems. Therefore, adapting the QEMU Guest Agent configuration in the guest is the preferred option.
禁用此选项可能导致备份的文件系统不一致。因此,建议优先调整虚拟机内的 QEMU 客户端代理配置。
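A sketch of the CLI equivalent of the second option, keeping the agent enabled and using the freeze-fs-on-backup property named above (the VMID is a placeholder):
# qm set <vmid> -agent enabled=1,freeze-fs-on-backup=0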
Troubleshooting 故障排除
VM does not shut down 虚拟机无法关闭
Make sure the guest agent is installed and running.
确保客户代理已安装并正在运行。
Once the guest agent is enabled, Proxmox VE will send power commands like
shutdown via the guest agent. If the guest agent is not running, commands
cannot get executed properly and the shutdown command will run into a timeout.
一旦启用来宾代理,Proxmox VE 将通过来宾代理发送关机等电源命令。如果来宾代理未运行,命令将无法正确执行,关机命令将超时。
10.2.20. SPICE Enhancements
10.2.20. SPICE 增强功能
SPICE Enhancements are optional features that can improve the remote viewer
experience.
SPICE 增强功能是可选特性,可以改善远程查看器的使用体验。
To enable them via the GUI go to the Options panel of the virtual machine. Run
the following command to enable them via the CLI:
要通过图形界面启用它们,请进入虚拟机的“选项”面板。通过命令行启用它们,请运行以下命令:
qm set <vmid> -spice_enhancements foldersharing=1,videostreaming=all
|
|
To use these features the Display of the virtual machine
must be set to SPICE (qxl). 要使用这些功能,虚拟机的显示必须设置为 SPICE(qxl)。 |
Folder Sharing 文件夹共享
Share a local folder with the guest. The spice-webdavd daemon needs to be
installed in the guest. It makes the shared folder available through a local
WebDAV server located at http://localhost:9843.
与客户机共享本地文件夹。客户机中需要安装 spice-webdavd 守护进程。它通过位于 http://localhost:9843 的本地 WebDAV 服务器提供共享文件夹。
For Windows guests the installer for the Spice WebDAV daemon can be downloaded
from the
official SPICE website.
对于 Windows 客户机,可以从官方 SPICE 网站下载 Spice WebDAV 守护进程的安装程序。
Most Linux distributions have a package called spice-webdavd that can be
installed.
大多数 Linux 发行版都有一个名为 spice-webdavd 的包可以安装。
To share a folder in Virt-Viewer (Remote Viewer) go to File → Preferences.
Select the folder to share and then enable the checkbox.
要在 Virt-Viewer(远程查看器)中共享文件夹,请转到 文件 → 首选项。选择要共享的文件夹,然后勾选复选框。
|
|
Folder sharing currently only works in the Linux version of Virt-Viewer. 文件夹共享目前仅在 Linux 版本的 Virt-Viewer 中有效。 |
|
|
Experimental! Currently this feature does not work reliably. 实验性功能!目前此功能尚不稳定。 |
Video Streaming 视频流
Fast refreshing areas are encoded into a video stream. Two options exist:
快速刷新区域被编码成视频流。有两种选项:
-
all: Any fast refreshing area will be encoded into a video stream.
all:任何快速刷新区域都会被编码成视频流。 -
filter: Additional filters are used to decide if video streaming should be used (currently only small window surfaces are skipped).
filter:使用额外的过滤器来决定是否使用视频流(目前仅跳过小窗口表面)。
A general recommendation on whether video streaming should be enabled, and which
option to choose, cannot be given. Your mileage may vary depending on the specific
circumstances.
如果是否应启用视频流以及选择哪个选项无法给出通用建议。具体情况可能因环境而异。
Troubleshooting 故障排除
Shared folder does not show up 共享文件夹未显示
Make sure the WebDAV service is enabled and running in the guest. On Windows it
is called Spice webdav proxy. In Linux the name is spice-webdavd but can be
different depending on the distribution.
确保 WebDAV 服务在客户机中已启用并正在运行。在 Windows 中,该服务称为 Spice webdav 代理。在 Linux 中,名称为 spice-webdavd,但可能因发行版不同而有所差异。
If the service is running, check the WebDAV server by opening
http://localhost:9843 in a browser in the guest.
如果服务正在运行,可以在客户机的浏览器中打开 http://localhost:9843 来检查 WebDAV 服务器。
It can help to restart the SPICE session.
重启 SPICE 会话可能会有所帮助。
10.3. Migration 10.3. 迁移
If you have a cluster, you can migrate your VM to another host with:
# qm migrate <vmid> <target>
There are generally two mechanisms for this:
通常有两种机制
-
Online Migration (aka Live Migration)
在线迁移(又称实时迁移) -
Offline Migration 离线迁移
10.3.1. Online Migration
10.3.1. 在线迁移
If your VM is running and no locally bound resources are configured (such as
devices that are passed through), you can initiate a live migration with the --online
flag in the qm migrate command invocation. The web interface defaults to
live migration when the VM is running.
如果您的虚拟机正在运行且未配置本地绑定资源(例如直通设备),您可以在调用 qm migration 命令时使用 --online 标志来启动在线迁移。网页界面在虚拟机运行时默认使用在线迁移。
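For example, a live migration of a running VM to another cluster node could look like this (the VMID and node name are placeholders); if the VM uses local disks, the --with-local-disks flag is additionally required:
# qm migrate <vmid> <targetnode> --online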
How it works 工作原理
Online migration first starts a new QEMU process on the target host with the
incoming flag, which performs only basic initialization with the guest vCPUs
still paused and then waits for the guest memory and device state data streams
of the source Virtual Machine.
All other resources, such as disks, are either shared or got already sent
before runtime state migration of the VMs begins; so only the memory content
and device state remain to be transferred.
在线迁移首先在目标主机上启动一个带有 incoming 标志的新 QEMU 进程,该进程仅执行基本初始化,虚拟机的 vCPU 仍处于暂停状态,然后等待源虚拟机的内存和设备状态数据流。所有其他资源,如磁盘,要么是共享的,要么已在虚拟机运行时状态迁移开始之前发送完毕;因此,只有内存内容和设备状态需要被传输。
Once this connection is established, the source begins asynchronously sending
the memory content to the target. If the guest memory on the source changes,
those sections are marked dirty and another pass is made to send the guest
memory data.
This loop is repeated until the data difference between the running source VM
and the incoming target VM is small enough to be sent in a few milliseconds.
At that point the source VM can be paused completely, without a user or program
noticing the pause, the remaining data can be sent to the target, and the
target VM's CPU can be unpaused to make it the new running VM in well under a
second.
一旦建立了此连接,源端开始异步地将内存内容发送到目标端。如果源端的客户机内存发生变化,这些部分会被标记为脏页,并进行另一轮传输以发送客户机内存数据。这个循环会重复进行,直到运行中的源虚拟机与接收中的目标虚拟机之间的数据差异足够小,可以在几毫秒内发送完毕。此时,源虚拟机可以完全暂停,用户或程序不会察觉暂停,从而将剩余数据发送到目标端,然后恢复目标虚拟机的 CPU,使其在不到一秒的时间内成为新的运行虚拟机。
Requirements 要求
For Live Migration to work, there are some things required:
为了使实时迁移生效,需要满足以下条件:
-
The VM has no local resources that cannot be migrated. For example, PCI or USB devices that are passed through currently block live-migration. Local Disks, on the other hand, can be migrated by sending them to the target just fine.
虚拟机没有无法迁移的本地资源。例如,当前通过直通的 PCI 或 USB 设备会阻止实时迁移。另一方面,本地磁盘可以通过发送到目标端来顺利迁移。 -
The hosts are located in the same Proxmox VE cluster.
主机位于同一个 Proxmox VE 集群中。 -
The hosts have a working (and reliable) network connection between them.
主机之间有一个正常(且可靠)的网络连接。 -
The target host must have the same, or higher versions of the Proxmox VE packages. Although it can sometimes work the other way around, this cannot be guaranteed.
目标主机必须安装相同或更高版本的 Proxmox VE 软件包。虽然有时反过来也能工作,但无法保证。 -
The hosts have CPUs from the same vendor with similar capabilities. A different vendor might work depending on the actual models and the VM's configured CPU type, but it cannot be guaranteed - so please test before deploying such a setup in production.
主机的 CPU 来自同一供应商且具有相似的性能。不同供应商的 CPU 可能可行,具体取决于实际型号和配置的虚拟机 CPU 类型,但无法保证——因此请在生产环境部署此类设置前进行测试。
10.3.2. Offline Migration
10.3.2. 离线迁移
If you have local resources, you can still migrate your VMs offline as long as
all disks are on storage defined on both hosts.
Migration then copies the disks to the target host over the network, as with
online migration. Note that any hardware passthrough configuration may need to
be adapted to the device location on the target host.
如果您有本地资源,只要所有磁盘都存放在两个主机上都定义的存储上,您仍然可以离线迁移虚拟机。迁移时会像在线迁移一样,通过网络将磁盘复制到目标主机。请注意,任何硬件直通配置可能需要根据目标主机上的设备位置进行调整。
10.4. Copies and Clones
10.4. 复制与克隆
VM installation is usually done using an installation media (CD-ROM)
from the operating system vendor. Depending on the OS, this can be a
time consuming task one might want to avoid.
虚拟机安装通常使用操作系统供应商提供的安装介质(CD-ROM)进行。根据操作系统的不同,这可能是一个耗时的任务,可能希望避免。
An easy way to deploy many VMs of the same type is to copy an existing
VM. We use the term clone for such copies, and distinguish between
linked and full clones.
一种部署多个相同类型虚拟机的简便方法是复制现有虚拟机。我们将这种复制称为克隆,并区分链接克隆和完整克隆。
- Full Clone 完整克隆
-
The result of such copy is an independent VM. The new VM does not share any storage resources with the original.
这种复制的结果是一个独立的虚拟机。新虚拟机不与原虚拟机共享任何存储资源。It is possible to select a Target Storage, so one can use this to migrate a VM to a totally different storage. You can also change the disk image Format if the storage driver supports several formats.
可以选择目标存储,因此可以利用此功能将虚拟机迁移到完全不同的存储。若存储驱动支持多种格式,还可以更改磁盘镜像格式。A full clone needs to read and copy all VM image data. This is usually much slower than creating a linked clone.
完整克隆需要读取并复制所有虚拟机镜像数据。这通常比创建链接克隆慢得多。Some storage types allow copying a specific Snapshot, which defaults to the current VM data. This also means that the final copy never includes any additional snapshots from the original VM.
某些存储类型允许复制特定的快照,默认是当前的虚拟机数据。这也意味着最终的复制永远不会包含原始虚拟机的任何额外快照。 - Linked Clone 链接克隆
-
Modern storage drivers support a way to generate fast linked clones. Such a clone is a writable copy whose initial contents are the same as the original data. Creating a linked clone is nearly instantaneous, and initially consumes no additional space.
现代存储驱动支持生成快速链接克隆的方法。这样的克隆是一个可写的副本,其初始内容与原始数据相同。创建链接克隆几乎是瞬时完成的,且最初不占用额外空间。They are called linked because the new image still refers to the original. Unmodified data blocks are read from the original image, but modifications are written (and afterwards read) from a new location. This technique is called Copy-on-write.
它们被称为"链接",因为新的镜像仍然引用原始镜像。未修改的数据块从原始镜像读取,但修改的数据块则写入(随后读取)到一个新位置。这种技术称为写时复制(Copy-on-write)。This requires that the original volume is read-only. With Proxmox VE one can convert any VM into a read-only Template. Such templates can later be used to create linked clones efficiently.
这要求原始卷是只读的。在 Proxmox VE 中,可以将任何虚拟机转换为只读模板。这样的模板以后可以用来高效地创建链接克隆。You cannot delete an original template while linked clones exist.
当存在链接克隆时,不能删除原始模板。It is not possible to change the Target storage for linked clones, because this is a storage internal feature.
无法更改链接克隆的目标存储,因为这是存储内部的功能。
The Target node option allows you to create the new VM on a
different node. The only restriction is that the VM is on shared
storage, and that storage is also available on the target node.
目标节点选项允许您在不同的节点上创建新的虚拟机。唯一的限制是虚拟机必须位于共享存储上,并且该存储也必须在目标节点上可用。
To avoid resource conflicts, all network interface MAC addresses get
randomized, and we generate a new UUID for the VM BIOS (smbios1)
setting.
为了避免资源冲突,所有网络接口的 MAC 地址都会被随机化,并且我们会为虚拟机 BIOS(smbios1)设置生成一个新的 UUID。
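As a sketch, a full clone to a specific storage and a linked clone could be created like this (the VMIDs, names and storage are placeholders; VMID 900 is assumed to be a template, since linked clones require one):
# qm clone 100 200 --full --name full-copy --storage <storage>
# qm clone 900 201 --name linked-copy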
10.5. Virtual Machine Templates
10.5. 虚拟机模板
One can convert a VM into a Template. Such templates are read-only,
and you can use them to create linked clones.
可以将虚拟机转换为模板。此类模板是只读的,您可以使用它们来创建链接克隆。
|
|
It is not possible to start templates, because this would modify
the disk images. If you want to change the template, create a linked
clone and modify that. 无法启动模板,因为这会修改磁盘映像。如果您想更改模板,请创建一个链接克隆并修改它。 |
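Converting a VM into a template can also be done on the CLI (the VMID is a placeholder):
# qm template <vmid>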
10.6. VM Generation ID
10.6. 虚拟机生成 ID
Proxmox VE supports Virtual Machine Generation ID (vmgenid) [47]
for virtual machines.
This can be used by the guest operating system to detect any event resulting
in a time shift, for example, restoring a backup or a snapshot rollback.
Proxmox VE 支持虚拟机生成 ID(vmgenid)[47],供虚拟机使用。来宾操作系统可以利用该 ID 检测任何导致时间偏移的事件,例如恢复备份或快照回滚。
When creating new VMs, a vmgenid will be automatically generated and saved
in its configuration file.
创建新虚拟机时,vmgenid 会自动生成并保存在其配置文件中。
To create and add a vmgenid to an already existing VM one can pass the
special value ‘1’ to let Proxmox VE autogenerate one or manually set the UUID
[48] by using it as value, for
example:
要为已存在的虚拟机创建并添加 vmgenid,可以传递特殊值“1”让 Proxmox VE 自动生成一个,或者手动设置 UUID [48],例如:
# qm set VMID -vmgenid 1
# qm set VMID -vmgenid 00000000-0000-0000-0000-000000000000
|
|
The initial addition of a vmgenid device to an existing VM may result
in the same effects as a snapshot rollback, backup restore, etc. has,
as the VM can interpret this as a generation change. 首次向已存在的虚拟机添加 vmgenid 设备,可能会产生与快照回滚、备份恢复等操作相同的效果,因为虚拟机可能将其解释为代数变化。 |
In the rare case the vmgenid mechanism is not wanted one can pass ‘0’ for
its value on VM creation, or retroactively delete the property in the
configuration with:
在极少数情况下,如果不需要 vmgenid 机制,可以在创建虚拟机时将其值设为“0”,或者事后通过配置删除该属性:
# qm set VMID -delete vmgenid
The most prominent use case for vmgenid are newer Microsoft Windows
operating systems, which use it to avoid problems in time-sensitive or
replicated services (such as databases or domain controllers
[49])
on snapshot rollback, backup restore or a whole VM clone operation.
vmgenid 最主要的使用场景是较新的 Microsoft Windows 操作系统,它们使用该机制来避免在快照回滚、备份恢复或整个虚拟机克隆操作中,时间敏感或复制服务(如数据库或域控制器 [49])出现问题。
10.7. Importing Virtual Machines
10.7. 导入虚拟机
Importing existing virtual machines from foreign hypervisors or other Proxmox VE
clusters can be achieved through various methods, the most common ones are:
可以通过多种方法从外部虚拟机管理程序或其他 Proxmox VE 集群导入现有虚拟机,最常见的方法有:
-
Using the native import wizard, which utilizes the import content type, such as provided by the ESXi special storage.
使用原生导入向导,该向导利用导入内容类型,例如由 ESXi 特殊存储提供的内容。 -
Performing a backup on the source and then restoring on the target. This method works best when migrating from another Proxmox VE instance.
在源端执行备份,然后在目标端恢复。此方法在从另一个 Proxmox VE 实例迁移时效果最佳。 -
using the OVF-specific import command of the qm command-line tool.
使用 qm 命令行工具的 OVF 专用导入命令。
If you import VMs to Proxmox VE from other hypervisors, it’s recommended to
familiarize yourself with the
concepts of Proxmox VE.
如果您从其他虚拟机管理程序导入虚拟机到 Proxmox VE,建议先熟悉 Proxmox VE 的相关概念。
10.7.1. Import Wizard 10.7.1. 导入向导
Proxmox VE provides an integrated VM importer using the storage plugin system for
native integration into the API and web-based user interface. You can use this
to import the VM as a whole, with most of its config mapped to Proxmox VE’s config
model and reduced downtime.
Proxmox VE 提供了一个集成的虚拟机导入器,使用存储插件系统实现与 API 和基于网页的用户界面的原生集成。您可以使用它整体导入虚拟机,其大部分配置会映射到 Proxmox VE 的配置模型,并减少停机时间。
|
|
The import wizard was added during the Proxmox VE 8.2 development cycle and is
in tech preview state. While it’s already promising and working stable, it’s
still under active development. 导入向导是在 Proxmox VE 8.2 开发周期中添加的,目前处于技术预览状态。虽然它已经表现出良好的前景并且运行稳定,但仍在积极开发中。 |
To use the import wizard you have to first set up a new storage for an import
source, you can do so on the web-interface under Datacenter → Storage → Add.
要使用导入向导,您必须先为导入源设置一个新的存储,可以在网页界面中的数据中心 → 存储 → 添加进行设置。
Then you can select the new storage in the resource tree and use the Virtual
Guests content tab to see all available guests that can be imported.
然后,您可以在资源树中选择新的存储,并使用虚拟机内容标签查看所有可导入的虚拟机。
Select one and use the Import button (or double-click) to open the import
wizard. You can modify a subset of the available options here and then start the
import. Please note that you can do more advanced modifications after the import
finished.
选择一个虚拟机,使用导入按钮(或双击)打开导入向导。您可以在这里修改部分可用选项,然后开始导入。请注意,导入完成后,您可以进行更高级的修改。
|
|
The ESXi import wizard has been tested with ESXi versions 6.5 through
8.0. Note that guests using vSAN storage cannot be imported directly;
their disks must first be moved to another storage. While it is possible to use
a vCenter as the import source, performance is dramatically degraded (5 to 10
times slower). ESXi 导入向导已在 ESXi 6.5 至 8.0 版本中进行了测试。请注意,使用 vSAN 存储的虚拟机无法直接导入;必须先将其磁盘移动到其他存储。虽然可以使用 vCenter 作为导入源,但性能会大幅下降(慢 5 到 10 倍)。 |
For a step-by-step guide and tips for how to adapt the virtual guest to the new
hyper-visor see our
migrate to Proxmox VE
wiki article.
有关逐步指南以及如何将虚拟机适配到新虚拟机监控程序的技巧,请参阅我们的迁移到 Proxmox VE 维基文章。
OVA/OVF Import OVA/OVF 导入
To import OVA/OVF files, you first need a File-based storage with the import
content type. On this storage, there will be an import folder where you can
put OVA files or OVF files with the corresponding images in a flat structure.
Alternatively you can use the web UI to upload or download OVA files directly.
You can then use the web UI to select those and use the import wizard to import
the guests.
要导入 OVA/OVF 文件,首先需要一个具有导入内容类型的基于文件的存储。在该存储上,会有一个导入文件夹,您可以将 OVA 文件或带有相应镜像的 OVF 文件以扁平结构放入其中。或者,您也可以使用网页界面直接上传或下载 OVA 文件。然后,您可以使用网页界面选择这些文件,并使用导入向导导入虚拟机。
For OVA files, there is additional space needed to temporarily extract the
image. This needs a file-based storage that has the images content type
configured. By default the source storage is selected for this, but you can
specify a Import Working Storage on which the images will be extracted before
importing to the actual target storage.
对于 OVA 文件,需要额外的空间来临时解压映像。这需要一个配置了映像内容类型的基于文件的存储。默认情况下,选择源存储作为此用途,但您可以指定一个导入工作存储,在将映像导入实际目标存储之前先在该存储上解压映像。
|
|
Since OVA/OVF file structure and content are not always well maintained
or defined, it might be necessary to adapt some guest settings manually. For
example the SCSI controller type is almost never defined in OVA/OVF files, but
the default is unbootable with OVMF (UEFI), so you should select Virtio SCSI
or VMware PVSCSI for these cases. 由于 OVA/OVF 文件的结构和内容并不总是维护良好或定义明确,可能需要手动调整一些客户机设置。例如,SCSI 控制器类型几乎从未在 OVA/OVF 文件中定义,但默认情况下使用 OVMF(UEFI)时无法启动,因此在这些情况下应选择 Virtio SCSI 或 VMware PVSCSI。 |
10.7.2. Import OVF/OVA Through CLI
10.7.2. 通过 CLI 导入 OVF/OVA
A VM export from a foreign hypervisor takes usually the form of one or more disk
images, with a configuration file describing the settings of the VM (RAM,
number of cores).
来自其他虚拟机管理程序的虚拟机导出通常以一个或多个磁盘映像的形式存在,并附带一个描述虚拟机设置(内存、核心数)的配置文件。
The disk images can be in the vmdk format, if the disks come from
VMware or VirtualBox, or qcow2 if the disks come from a KVM hypervisor.
The most popular configuration format for VM exports is the OVF standard, but in
practice interoperation is limited because many settings are not implemented in
the standard itself, and hypervisors export the supplementary information
in non-standard extensions.
磁盘镜像可以是 vmdk 格式,如果磁盘来自 VMware 或 VirtualBox;如果磁盘来自 KVM 虚拟机管理程序,则为 qcow2 格式。虚拟机导出的最流行配置格式是 OVF 标准,但实际上互操作性有限,因为许多设置并未在标准本身中实现,且虚拟机管理程序以非标准扩展方式导出补充信息。
Besides the problem of format, importing disk images from other hypervisors
may fail if the emulated hardware changes too much from one hypervisor to
another. Windows VMs are particularly concerned by this, as the OS is very
picky about any changes of hardware. This problem may be solved by
installing the MergeIDE.zip utility available from the Internet before exporting
and choosing a hard disk type of IDE before booting the imported Windows VM.
除了格式问题外,如果模拟硬件在不同虚拟机管理程序之间变化过大,从其他虚拟机管理程序导入磁盘镜像可能会失败。Windows 虚拟机尤其受到影响,因为操作系统对硬件的任何变化都非常挑剔。这个问题可以通过在导出前安装网络上可获得的 MergeIDE.zip 工具,并在启动导入的 Windows 虚拟机之前选择 IDE 类型的硬盘来解决。
Finally there is the question of paravirtualized drivers, which improve the
speed of the emulated system and are specific to the hypervisor.
GNU/Linux and other free Unix OSes have all the necessary drivers installed by
default and you can switch to the paravirtualized drivers right after importing
the VM. For Windows VMs, you need to install the Windows paravirtualized
drivers by yourself.
最后是关于半虚拟化驱动程序的问题,这些驱动程序可以提高模拟系统的速度,并且是特定于虚拟机管理程序的。GNU/Linux 和其他自由 Unix 操作系统默认安装了所有必要的驱动程序,您可以在导入虚拟机后立即切换到半虚拟化驱动程序。对于 Windows 虚拟机,您需要自行安装 Windows 半虚拟化驱动程序。
GNU/Linux and other free Unix can usually be imported without hassle. Note
that we cannot guarantee a successful import/export of Windows VMs in all
cases due to the problems above.
GNU/Linux 和其他自由 Unix 通常可以轻松导入。请注意,由于上述问题,我们不能保证在所有情况下都能成功导入/导出 Windows 虚拟机。
Step-by-step example of a Windows OVF import
Windows OVF 导入的逐步示例
Microsoft provides
Virtual Machines downloads
to get started with Windows development. We are going to use one of these
to demonstrate the OVF import feature.
Microsoft 提供了虚拟机下载,以便开始 Windows 开发。我们将使用其中一个来演示 OVF 导入功能。
Download the Virtual Machine zip
下载虚拟机压缩包
After reviewing the user agreement, choose the Windows 10
Enterprise (Evaluation - Build) for the VMware platform, and download the zip.
在了解用户协议后,选择适用于 VMware 平台的 Windows 10 Enterprise(评估版 - 构建),并下载压缩包。
Extract the disk image from the zip
从压缩包中提取磁盘映像
Using the unzip utility or any archiver of your choice, unpack the zip,
and copy via ssh/scp the ovf and vmdk files to your Proxmox VE host.
使用解压工具或您选择的任何压缩软件,解压缩包,并通过 ssh/scp 将 ovf 和 vmdk 文件复制到您的 Proxmox VE 主机。
Import the Virtual Machine
导入虚拟机
This will create a new virtual machine, using cores, memory and
VM name as read from the OVF manifest, and import the disks to the local-lvm
storage. You have to configure the network manually.
这将创建一个新的虚拟机,使用从 OVF 清单中读取的核心数、内存和虚拟机名称,并将磁盘导入到 local-lvm 存储中。您需要手动配置网络。
# qm importovf 999 WinDev1709Eval.ovf local-lvm
The VM is ready to be started.
虚拟机已准备好启动。
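For example, a network device could then be attached to the imported VM like this (a sketch; adjust the bridge name and model to your environment):
# attach a VirtIO NIC on the default bridge
qm set 999 --net0 virtio,bridge=vmbr0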
Adding an external disk image to a Virtual Machine
向虚拟机添加外部磁盘映像
You can also add an existing disk image to a VM, either coming from a
foreign hypervisor, or one that you created yourself.
您还可以将现有的磁盘镜像添加到虚拟机中,无论是来自其他虚拟机管理程序的,还是您自己创建的。
Suppose you created a Debian/Ubuntu disk image with the vmdebootstrap tool:
假设您使用 vmdebootstrap 工具创建了一个 Debian/Ubuntu 磁盘镜像:
vmdebootstrap --verbose \
 --size 10GiB --serial-console \
 --grub --no-extlinux \
 --package openssh-server \
 --package avahi-daemon \
 --package qemu-guest-agent \
 --hostname vm600 --enable-dhcp \
 --customize=./copy_pub_ssh.sh \
 --sparse --image vm600.raw
You can now create a new target VM, importing the image to the storage pvedir
and attaching it to the VM’s SCSI controller:
现在,您可以创建一个新的目标虚拟机,将镜像导入到存储 pvedir 中,并将其连接到虚拟机的 SCSI 控制器:
# qm create 600 --net0 virtio,bridge=vmbr0 --name vm600 --serial0 socket \
   --boot order=scsi0 --scsihw virtio-scsi-pci --ostype l26 \
   --scsi0 pvedir:0,import-from=/path/to/dir/vm600.raw
The VM is ready to be started.
虚拟机已准备好启动。
10.8. Cloud-Init Support
10.8. Cloud-Init 支持
Cloud-Init is the de facto
multi-distribution package that handles early initialization of a
virtual machine instance. Using Cloud-Init, configuration of network
devices and ssh keys on the hypervisor side is possible. When the VM
starts for the first time, the Cloud-Init software inside the VM will
apply those settings.
Cloud-Init 是事实上的多发行版包,用于处理虚拟机实例的早期初始化。通过 Cloud-Init,可以在管理程序端配置网络设备和 ssh 密钥。当虚拟机首次启动时,虚拟机内的 Cloud-Init 软件将应用这些设置。
Many Linux distributions provide ready-to-use Cloud-Init images, mostly
designed for OpenStack. These images will also work with Proxmox VE. While
it may seem convenient to get such ready-to-use images, we usually
recommend preparing the images yourself. The advantage is that you
will know exactly what you have installed, and this helps you later to
easily customize the image for your needs.
许多 Linux 发行版提供了即用型的 Cloud-Init 镜像,主要针对 OpenStack 设计。这些镜像同样适用于 Proxmox VE。虽然使用此类即用型镜像看起来很方便,但我们通常建议自行准备镜像。这样做的好处是您能确切知道安装了什么,这有助于您以后根据需求轻松定制镜像。
Once you have created such a Cloud-Init image, we recommend converting it
into a VM template. From a VM template you can quickly create linked
clones, so this is a fast method to roll out new VM instances. You just
need to configure the network (and maybe the ssh keys) before you start
the new VM.
创建好 Cloud-Init 镜像后,我们建议将其转换为虚拟机模板。通过虚拟机模板,您可以快速创建链接克隆,这是一种快速部署新虚拟机实例的方法。启动新虚拟机前,您只需配置网络(可能还包括 ssh 密钥)。
We recommend using SSH key-based authentication to login to the VMs
provisioned by Cloud-Init. It is also possible to set a password, but
this is not as safe as using SSH key-based authentication because Proxmox VE
needs to store an encrypted version of that password inside the
Cloud-Init data.
我们建议使用基于 SSH 密钥的认证来登录由 Cloud-Init 配置的虚拟机。也可以设置密码,但这不如使用基于 SSH 密钥的认证安全,因为 Proxmox VE 需要在 Cloud-Init 数据中存储该密码的加密版本。
Proxmox VE generates an ISO image to pass the Cloud-Init data to the VM. For
that purpose, all Cloud-Init VMs need to have an assigned CD-ROM drive.
Usually, a serial console should be added and used as a display. Many Cloud-Init
images rely on this, as it is a requirement for OpenStack. However, other images
might have problems with this configuration. Switch back to the default display
configuration if using a serial console doesn’t work.
Proxmox VE 会生成一个 ISO 镜像,将 Cloud-Init 数据传递给虚拟机。为此,所有 Cloud-Init 虚拟机都需要分配一个光驱。通常,应添加串行控制台并将其用作显示设备。许多 Cloud-Init 镜像依赖此功能,这是 OpenStack 的要求。然而,其他镜像可能会在此配置下出现问题。如果使用串行控制台无效,请切换回默认显示配置。
10.8.1. Preparing Cloud-Init Templates
10.8.1. 准备 Cloud-Init 模板
The first step is to prepare your VM. Basically you can use any VM.
Simply install the Cloud-Init packages inside the VM that you want to
prepare. On Debian/Ubuntu based systems this is as simple as:
第一步是准备您的虚拟机。基本上,您可以使用任何虚拟机。只需在您想要准备的虚拟机内安装 Cloud-Init 软件包即可。在基于 Debian/Ubuntu 的系统上,这非常简单:
apt-get install cloud-init
|
|
This command is not intended to be executed on the Proxmox VE host, but
only inside the VM. 此命令不应在 Proxmox VE 主机上执行,只能在虚拟机内部执行。 |
Many distributions already provide ready-to-use Cloud-Init images (provided
as .qcow2 files), so alternatively you can simply download and
import such images. For the following example, we will use the cloud
image provided by Ubuntu at https://cloud-images.ubuntu.com.
许多发行版已经提供了可直接使用的 Cloud-Init 镜像(以.qcow2 文件形式提供),因此你也可以直接下载并导入这些镜像。以下示例中,我们将使用 Ubuntu 在 https://cloud-images.ubuntu.com 提供的云镜像。
# download the image
wget https://cloud-images.ubuntu.com/bionic/current/bionic-server-cloudimg-amd64.img

# create a new VM with VirtIO SCSI controller
qm create 9000 --memory 2048 --net0 virtio,bridge=vmbr0 --scsihw virtio-scsi-pci

# import the downloaded disk to the local-lvm storage, attaching it as a SCSI drive
qm set 9000 --scsi0 local-lvm:0,import-from=/path/to/bionic-server-cloudimg-amd64.img
|
|
Ubuntu Cloud-Init images require the virtio-scsi-pci
controller type for SCSI drives. Ubuntu Cloud-Init 镜像要求 SCSI 驱动使用 virtio-scsi-pci 控制器类型。 |
Add Cloud-Init CD-ROM drive 添加 Cloud-Init 光盘驱动器
The next step is to configure a CD-ROM drive, which will be used to pass
the Cloud-Init data to the VM.
下一步是配置一个光驱,用于将 Cloud-Init 数据传递给虚拟机。
qm set 9000 --ide2 local-lvm:cloudinit
To be able to boot directly from the Cloud-Init image, set the boot parameter
to order=scsi0 to restrict BIOS to boot from this disk only. This will speed
up booting, because VM BIOS skips the testing for a bootable CD-ROM.
为了能够直接从 Cloud-Init 镜像启动,将启动参数设置为 order=scsi0,以限制 BIOS 仅从此磁盘启动。这将加快启动速度,因为虚拟机 BIOS 会跳过对可启动光驱的检测。
qm set 9000 --boot order=scsi0
For many Cloud-Init images, it is required to configure a serial console and use
it as a display. If the configuration doesn’t work for a given image however,
switch back to the default display instead.
对于许多 Cloud-Init 镜像,需要配置串行控制台并将其用作显示。如果该配置对某个镜像不起作用,则切换回默认显示。
qm set 9000 --serial0 socket --vga serial0
In a last step, it is helpful to convert the VM into a template. From
this template you can then quickly create linked clones.
The deployment from VM templates is much faster than creating a full
clone (copy).
最后一步,将虚拟机转换为模板非常有用。然后可以从该模板快速创建链接克隆。使用虚拟机模板部署比创建完整克隆(复制)要快得多。
qm template 9000
10.8.2. Deploying Cloud-Init Templates
10.8.2. 部署 Cloud-Init 模板
You can easily deploy such a template by cloning:
您可以通过克隆轻松部署此类模板:
qm clone 9000 123 --name ubuntu2
Then configure the SSH public key used for authentication, and configure
the IP setup:
然后配置用于认证的 SSH 公钥,并配置 IP 设置:
qm set 123 --sshkey ~/.ssh/id_rsa.pub
qm set 123 --ipconfig0 ip=10.0.10.123/24,gw=10.0.10.1
You can also configure all the Cloud-Init options using a single command
only. We have simply split the above example into separate commands to
reduce the line length. Also make sure to adapt the IP setup for your
specific environment.
您也可以仅使用一个命令配置所有 Cloud-Init 选项。我们只是将上述示例拆分为多个命令,以减少每行的长度。同时,请确保根据您的具体环境调整 IP 设置。
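For example, the two commands above could be combined into a single invocation (a sketch; adapt the key path and addresses):
qm set 123 --sshkey ~/.ssh/id_rsa.pub --ipconfig0 ip=10.0.10.123/24,gw=10.0.10.1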
10.8.3. Custom Cloud-Init Configuration
10.8.3. 自定义 Cloud-Init 配置
The Cloud-Init integration also allows custom config files to be used instead
of the automatically generated configs. This is done via the cicustom
option on the command line:
Cloud-Init 集成还允许使用自定义配置文件来替代自动生成的配置。这可以通过命令行上的 cicustom 选项来实现:
qm set 9000 --cicustom "user=<volume>,network=<volume>,meta=<volume>"
The custom config files have to be on a storage that supports snippets and have
to be available on all nodes the VM is going to be migrated to. Otherwise the
VM won’t be able to start.
For example:
自定义配置文件必须存放在支持片段的存储上,并且必须在虚拟机将要迁移到的所有节点上都可用。否则虚拟机将无法启动。例如:
qm set 9000 --cicustom "user=local:snippets/userconfig.yaml"
There are three kinds of configs for Cloud-Init. The first one is the user
config as seen in the example above. The second is the network config and
the third the meta config. They can all be specified together or mixed
and matched however needed.
The automatically generated config will be used for any that don’t have a
custom config file specified.
Cloud-Init 有三种配置类型。第一种是用户配置,如上例所示。第二种是网络配置,第三种是元数据配置。它们可以全部一起指定,也可以根据需要混合搭配。对于没有指定自定义配置文件的配置,将使用自动生成的配置。
The generated config can be dumped to serve as a base for custom configs:
生成的配置可以导出,用作自定义配置的基础:
qm cloudinit dump 9000 user
The same command exists for network and meta.
相同的命令也适用于网络和元数据。
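As a sketch, assuming the local storage has the snippets content type enabled and using an illustrative file name, a dumped config can be turned into a custom config like this:
# dump the generated network config and store it as a snippet
qm cloudinit dump 9000 network > /var/lib/vz/snippets/network-9000.yaml
# use the snippet as the custom network config for the VM
qm set 9000 --cicustom "network=local:snippets/network-9000.yaml"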
10.8.4. Cloud-Init on Windows
10.8.4. Windows 上的 Cloud-Init
There is a reimplementation of Cloud-Init available for Windows called
cloudbase-init. Not every feature of Cloud-Init is
available with Cloudbase-Init, and some features differ compared to Cloud-Init.
有一个适用于 Windows 的 Cloud-Init 重新实现版本,称为 cloudbase-init。Cloudbase-Init 并不支持 Cloud-Init 的所有功能,且某些功能与 Cloud-Init 有所不同。
Cloudbase-Init requires the ostype to be set to a Windows version and the
citype to be set to configdrive2, which is the default for any Windows
ostype.
Cloudbase-Init 需要将 ostype 设置为任意 Windows 版本,并将 citype 设置为 configdrive2,后者是任何 Windows ostype 的默认值。
There are no ready-made cloud images for Windows available for free. Using
Cloudbase-Init requires manually installing and configuring a Windows guest.
目前没有免费的现成 Windows 云镜像。使用 Cloudbase-Init 需要手动安装和配置 Windows 客机。
10.8.5. Preparing Cloudbase-Init Templates
10.8.5. 准备 Cloudbase-Init 模板
The first step is to install Windows in a VM. Download and install
Cloudbase-Init in the guest. It may be necessary to install the Beta version.
Don’t run Sysprep at the end of the installation. Instead configure
Cloudbase-Init first.
第一步是在虚拟机中安装 Windows。在客机中下载并安装 Cloudbase-Init。可能需要安装 Beta 版本。安装结束时不要运行 Sysprep,而是先配置 Cloudbase-Init。
A few common options to set would be (see the configuration sketch after this list):
一些常见的设置选项包括(参见列表后的配置示例):
-
username: This sets the username of the administrator
username:设置管理员的用户名 -
groups: This allows one to add the user to the Administrators group
groups:允许将用户添加到管理员组 -
inject_user_password: Set this to true to allow setting the password in the VM config
inject_user_password:设置为 true 以允许在虚拟机配置中设置密码 -
first_logon_behaviour: Set this to no to not require a new password on login
first_logon_behaviour:设置为 no 表示登录时不要求更改密码 -
rename_admin_user: Set this to true to allow renaming the default Administrator user to the username specified with username
rename_admin_user:设置为 true 允许将默认的 Administrator 用户重命名为 username 指定的用户名 -
metadata_services: Set this to cloudbaseinit.metadata.services.configdrive.ConfigDriveService for Cloudbase-Init to first check this service. Otherwise it may take a few minutes for Cloudbase-Init to configure the system after boot.
metadata_services:设置为 cloudbaseinit.metadata.services.configdrive.ConfigDriveService,使 Cloudbase-Init 优先检查此服务。否则,Cloudbase-Init 可能需要几分钟时间才能在启动后配置系统。
Some plugins, for example the SetHostnamePlugin, require reboots and will do
so automatically. To disable automatic reboots by Cloudbase-Init, you can set
allow_reboot to false.
某些插件,例如 SetHostnamePlugin,需要重启并会自动执行。要禁用 Cloudbase-Init 的自动重启,可以将 allow_reboot 设置为 false。
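Putting these options together, the relevant part of cloudbase-init.conf could look roughly like the following sketch (the values are illustrative assumptions; consult the upstream Cloudbase-Init documentation for the exact keys and defaults):
[DEFAULT]
username=Admin
groups=Administrators
inject_user_password=true
first_logon_behaviour=no
rename_admin_user=true
metadata_services=cloudbaseinit.metadata.services.configdrive.ConfigDriveService
allow_reboot=false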
A full set of configuration options can be found in the
official
cloudbase-init documentation.
完整的配置选项集可以在官方的 cloudbase-init 文档中找到。
It can make sense to take a snapshot after configuring, in case some parts of
the config still need adjustment.
After configuring Cloudbase-Init you can start creating the template. Shut down
the Windows guest, add a Cloud-Init disk and make it into a template.
在配置完成后进行快照是有意义的,以防配置的某些部分仍需调整。配置好 Cloudbase-Init 后,您可以开始创建模板。关闭 Windows 客户机,添加一个 Cloud-Init 磁盘,然后将其制作成模板。
qm set 9000 --ide2 local-lvm:cloudinit
qm template 9000
Clone the template into a new VM:
将模板克隆为新的虚拟机:
qm clone 9000 123 --name windows123
Then set the password, network config and SSH key:
然后设置密码、网络配置和 SSH 密钥:
qm set 123 --cipassword <password>
qm set 123 --ipconfig0 ip=10.0.10.123/24,gw=10.0.10.1
qm set 123 --sshkey ~/.ssh/id_rsa.pub
Make sure that the ostype is set to any Windows version before setting the
password. Otherwise the password will be encrypted and Cloudbase-Init will use
the encrypted password as plaintext password.
确保在设置密码之前,ostype 被设置为任意 Windows 版本。否则密码将被加密,Cloudbase-Init 会将加密后的密码当作明文密码使用。
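For example (a sketch using win10 as the ostype; pick the version that matches your guest):
qm set 123 --ostype win10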
When everything is set, start the cloned guest. On the first boot, the login
won’t work yet and the guest will reboot automatically to apply the changed
hostname. After the reboot, the new password should be set and login should work.
当所有设置完成后,启动克隆的虚拟机。首次启动时登录将无法成功,系统会自动重启以应用更改的主机名。重启后应设置新密码,登录应能正常进行。
10.8.6. Cloudbase-Init and Sysprep
10.8.6. Cloudbase-Init 和 Sysprep
Sysprep is a feature to reset the configuration of Windows and provide a new
system. This can be used in conjunction with Cloudbase-Init to create a clean
template.
Sysprep 是一个用于重置 Windows 配置并提供新系统的功能。它可以与 Cloudbase-Init 配合使用,以创建干净的模板。
When using Sysprep, there are two configuration files that need to be adapted.
The first one is the normal configuration file, the second one is the one
ending in -unattend.conf.
使用 Sysprep 时,需要调整两个配置文件。第一个是普通配置文件,第二个是以 -unattend.conf 结尾的文件。
Cloudbase-Init runs in two steps: first the Sysprep step, using the
-unattend.conf, and then the regular step, using the primary config file.
Cloudbase-Init 分两步运行,第一步是使用 -unattend.conf 的 Sysprep 步骤,第二步是使用主配置文件的常规步骤。
For Windows Server, running Sysprep with the provided Unattend.xml file
should work out of the box. Normal Windows versions however require additional
steps:
对于运行 Sysprep 的 Windows Server,使用提供的 Unattend.xml 文件应能开箱即用。然而,普通 Windows 版本则需要额外的步骤:
-
Open a PowerShell instance
打开一个 PowerShell 窗口 -
Enable the Administrator user:
启用管理员用户:net user Administrator /active:yes
-
Install Cloudbase-Init using the Administrator user
使用管理员用户安装 Cloudbase-Init -
Modify Unattend.xml to include the command to enable the Administrator user on the first boot after sysprepping:
修改 Unattend.xml,添加命令以在 sysprep 后的首次启动时启用管理员用户:
<RunSynchronousCommand wcm:action="add">
  <Path>net user administrator /active:yes</Path>
  <Order>1</Order>
  <Description>Enable Administrator User</Description>
</RunSynchronousCommand>
Make sure the <Order> does not conflict with other synchronous commands. Modify <Order> of the Cloudbase-Init command to run after this one by increasing the number to a higher value: <Order>2</Order>
确保 <Order> 不与其他同步命令冲突。通过将 Cloudbase-Init 命令的 <Order> 修改为更高的值,使其在此命令之后运行:<Order>2</Order> -
(Windows 11 only) Remove the conflicting Microsoft.OneDriveSync package:
(仅限 Windows 11)删除冲突的 Microsoft.OneDriveSync 包:Get-AppxPackage -AllUsers Microsoft.OneDriveSync | Remove-AppxPackage -AllUsers
-
cd into the Cloudbase-Init config directory:
进入 Cloudbase-Init 配置目录:cd 'C:\Program Files\Cloudbase Solutions\Cloudbase-Init\conf'
-
(optional) Create a snapshot of the VM before Sysprep in case of a misconfiguration
(可选)在 Sysprep 之前创建虚拟机快照,以防配置错误 -
Run Sysprep: 运行 Sysprep:
C:\Windows\System32\Sysprep\sysprep.exe /generalize /oobe /unattend:Unattend.xml
After following the above steps the VM should be in shut down state due to
the Sysprep. Now you can make it into a template, clone it and configure
it as needed.
按照上述步骤操作后,虚拟机应处于因 Sysprep 而关闭的状态。现在你可以将其制作成模板,克隆它并根据需要进行配置。
10.8.7. Cloud-Init specific Options
10.8.7. Cloud-Init 特定选项
- cicustom: [meta=<volume>] [,network=<volume>] [,user=<volume>] [,vendor=<volume>]
-
Specify custom files to replace the automatically generated ones at start.
指定自定义文件以替换启动时自动生成的文件。- meta=<volume>
-
Specify a custom file containing all meta data passed to the VM via cloud-init. This is provider specific, meaning configdrive2 and nocloud differ.
指定一个自定义文件,包含通过 cloud-init 传递给虚拟机的所有元数据。此项依赖于提供者,意味着 configdrive2 和 nocloud 不同。 - network=<volume>
-
To pass a custom file containing all network data to the VM via cloud-init.
通过 cloud-init 向虚拟机传递包含所有网络数据的自定义文件。 - user=<volume>
-
To pass a custom file containing all user data to the VM via cloud-init.
通过 cloud-init 向虚拟机传递包含所有用户数据的自定义文件。 - vendor=<volume>
-
To pass a custom file containing all vendor data to the VM via cloud-init.
通过 cloud-init 向虚拟机传递包含所有供应商数据的自定义文件。
- cipassword: <string>
-
Password to assign the user. Using this is generally not recommended. Use ssh keys instead. Also note that older cloud-init versions do not support hashed passwords.
分配给用户的密码。通常不建议使用此方法。建议使用 ssh 密钥。另外请注意,较旧版本的 cloud-init 不支持哈希密码。 - citype: <configdrive2 | nocloud | opennebula>
-
Specifies the cloud-init configuration format. The default depends on the configured operating system type (ostype). We use the nocloud format for Linux, and configdrive2 for Windows.
指定 cloud-init 的配置格式。默认值取决于配置的操作系统类型(ostype)。我们对 Linux 使用 nocloud 格式,对 Windows 使用 configdrive2 格式。 -
ciupgrade: <boolean> (default = 1)
ciupgrade: <布尔值>(默认 = 1) -
Do an automatic package upgrade after the first boot.
首次启动后自动进行包升级。 - ciuser: <string> ciuser: <字符串>
-
User name to change ssh keys and password for instead of the image’s configured default user.
用于更改 SSH 密钥和密码的用户名,替代镜像中配置的默认用户。 -
ipconfig[n]: [gw=<GatewayIPv4>] [,gw6=<GatewayIPv6>] [,ip=<IPv4Format/CIDR>] [,ip6=<IPv6Format/CIDR>]
ipconfig[n]:[gw=<GatewayIPv4>] [,gw6=<GatewayIPv6>] [,ip=<IPv4 格式/CIDR>] [,ip6=<IPv6 格式/CIDR>] -
Specify IP addresses and gateways for the corresponding interface.
为对应的接口指定 IP 地址和网关。IP addresses use CIDR notation, gateways are optional but need an IP of the same type specified.
IP 地址使用 CIDR 表示法,网关是可选的,但需要指定相同类型的 IP。The special string dhcp can be used for IP addresses to use DHCP, in which case no explicit gateway should be provided. For IPv6 the special string auto can be used to use stateless autoconfiguration. This requires cloud-init 19.4 or newer.
IP 地址可以使用特殊字符串 dhcp 来启用 DHCP,此时不应提供显式网关。对于 IPv6,可以使用特殊字符串 auto 来启用无状态自动配置。这需要 cloud-init 19.4 或更高版本。If cloud-init is enabled and neither an IPv4 nor an IPv6 address is specified, it defaults to using dhcp on IPv4.
如果启用了 cloud-init 且未指定 IPv4 或 IPv6 地址,则默认使用 IPv4 的 dhcp。- gw=<GatewayIPv4>
-
Default gateway for IPv4 traffic.
IPv4 流量的默认网关。Requires option(s): ip 需要选项:ip - gw6=<GatewayIPv6>
-
Default gateway for IPv6 traffic.
IPv6 流量的默认网关。Requires option(s): ip6 需要选项:ip6 -
ip=<IPv4Format/CIDR> (default = dhcp)
ip=<IPv4 格式/CIDR>(默认 = dhcp) -
IPv4 address in CIDR format.
CIDR 格式的 IPv4 地址。 -
ip6=<IPv6Format/CIDR> (default = dhcp)
ip6=<IPv6 格式/CIDR>(默认 = dhcp) -
IPv6 address in CIDR format.
CIDR 格式的 IPv6 地址。
- nameserver: <string> nameserver: <字符串>
-
Sets DNS server IP address for a container. Create will automatically use the setting from the host if neither searchdomain nor nameserver are set.
为容器设置 DNS 服务器 IP 地址。如果未设置 searchdomain 和 nameserver,创建时将自动使用主机的设置。 - searchdomain: <string>
-
Sets DNS search domains for a container. Create will automatically use the setting from the host if neither searchdomain nor nameserver are set.
为容器设置 DNS 搜索域。如果未设置 searchdomain 和 nameserver,创建时将自动使用主机的设置。 - sshkeys: <string>
-
Setup public SSH keys (one key per line, OpenSSH format).
设置公共 SSH 密钥(每行一个密钥,OpenSSH 格式)。
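For example, a DHCP-based setup combining several of these options might look like this (a sketch; the values are placeholders to adapt):
qm set 123 --ciuser admin --ciupgrade 0 --ipconfig0 ip=dhcp,ip6=auto --nameserver 192.168.1.1 --searchdomain example.com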
10.9. PCI(e) Passthrough
10.9. PCI(e) 直通
PCI(e) passthrough is a mechanism to give a virtual machine control over
a PCI device from the host. This can have some advantages over using
virtualized hardware, for example lower latency, higher performance, or more
features (e.g., offloading).
PCI(e)直通是一种机制,可以让虚拟机控制主机上的 PCI 设备。这相比使用虚拟化硬件有一些优势,例如更低的延迟、更高的性能或更多的功能(例如卸载)。
But, if you pass through a device to a virtual machine, you cannot use that
device anymore on the host or in any other VM.
但是,如果你将设备直通给虚拟机,则该设备将无法再在主机或任何其他虚拟机中使用。
Note that, while PCI passthrough is available for i440fx and q35 machines, PCIe
passthrough is only available on q35 machines. This does not mean that
PCIe capable devices that are passed through as PCI devices will only run at
PCI speeds. Passing through devices as PCIe just sets a flag for the guest to
tell it that the device is a PCIe device instead of a "really fast legacy PCI
device". Some guest applications benefit from this.
请注意,虽然 PCI 直通适用于 i440fx 和 q35 机器,但 PCIe 直通仅适用于 q35 机器。这并不意味着作为 PCI 设备直通的 PCIe 设备只能以 PCI 速度运行。将设备作为 PCIe 直通只是为客户机设置一个标志,告诉它该设备是 PCIe 设备,而不是“真正快速的传统 PCI 设备”。某些客户机应用程序会从中受益。
10.9.1. General Requirements
10.9.1. 一般要求
Since passthrough is performed on real hardware, it needs to fulfill some
requirements. A brief overview of these requirements is given below, for more
information on specific devices, see
PCI Passthrough Examples.
由于直通是在真实硬件上执行的,因此需要满足一些要求。以下是这些要求的简要概述,关于特定设备的更多信息,请参见 PCI 直通示例。
Hardware 硬件
Your hardware needs to support IOMMU (I/O Memory Management
Unit) interrupt remapping; this includes the CPU and the motherboard.
您的硬件需要支持 IOMMU(输入输出内存管理单元)中断重映射,这包括 CPU 和主板。
Generally, Intel systems with VT-d and AMD systems with AMD-Vi support this.
But it is not guaranteed that everything will work out of the box, due
to bad hardware implementation and missing or low quality drivers.
通常,带有 VT-d 的 Intel 系统和带有 AMD-Vi 的 AMD 系统支持此功能。但由于硬件实现不佳以及驱动程序缺失或质量低劣,不能保证所有功能开箱即用。
Further, server grade hardware often has better support than consumer grade
hardware, but even then, many modern systems can support this.
此外,服务器级硬件通常比消费级硬件有更好的支持,但即便如此,许多现代系统也能支持此功能。
Please refer to your hardware vendor to check if they support this feature
under Linux for your specific setup.
请咨询您的硬件供应商,确认他们是否支持您特定配置下的 Linux 系统使用此功能。
Determining PCI Card Address
确定 PCI 卡地址
The easiest way is to use the GUI to add a device of type "Host PCI" in the VM’s
hardware tab. Alternatively, you can use the command line.
最简单的方法是在虚拟机的硬件标签页中使用图形界面添加“主机 PCI”类型的设备。或者,您也可以使用命令行。
You can locate your card using
您可以使用以下命令定位您的卡
lspci
Configuration 配置
Once you ensured that your hardware supports passthrough, you will need to do
some configuration to enable PCI(e) passthrough.
一旦确认您的硬件支持直通,您需要进行一些配置以启用 PCI(e)直通。
You will have to enable IOMMU support in your BIOS/UEFI. Usually the
corresponding setting is called IOMMU or VT-d, but you should find the exact
option name in the manual of your motherboard.
您需要在 BIOS/UEFI 中启用 IOMMU 支持。通常对应的设置称为 IOMMU 或 VT-d,但您应在主板手册中查找确切的选项名称。
With AMD CPUs IOMMU is enabled by default. With recent kernels (6.8 or newer),
this is also true for Intel CPUs. On older kernels, it is necessary to enable
it on Intel CPUs via the
kernel command line by adding:
对于 AMD CPU,IOMMU 默认已启用。对于较新的内核(6.8 或更新版本),Intel CPU 也同样如此。在较旧的内核中,需要通过在内核命令行添加以下内容来启用 Intel CPU 的 IOMMU:
intel_iommu=on
If your hardware supports IOMMU passthrough mode, enabling this mode might
increase performance.
This is because VMs then bypass the (default) DMA translation normally
performed by the hypervisor and instead pass DMA requests directly to the
hardware IOMMU. To enable these options, add:
如果您的硬件支持 IOMMU 直通模式,启用此模式可能会提升性能。这是因为虚拟机将绕过由虚拟机监控程序默认执行的 DMA 转换,而是直接将 DMA 请求传递给硬件 IOMMU。要启用这些选项,请添加:
iommu=pt
to the kernel commandline.
到内核命令行。
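For example, on a host that boots via GRUB, these parameters can be appended to GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub and activated with update-grub (a sketch; hosts using systemd-boot edit /etc/kernel/cmdline and run proxmox-boot-tool refresh instead):
# /etc/default/grub (excerpt)
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"
# apply the change, then reboot
update-grub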
You have to make sure the following modules are loaded. This can be achieved by
adding them to ‘/etc/modules’.
你必须确保以下模块已加载。可以通过将它们添加到‘/etc/modules’来实现。
|
|
Mediated devices passthrough
中介设备直通 If passing through mediated devices (e.g. vGPUs), the following is not needed.
In these cases, the device will be owned by the appropriate host-driver
directly. |
vfio vfio_iommu_type1 vfio_pci
After changing anything module-related, you need to refresh your
initramfs. On Proxmox VE this can be done by executing:
更改任何与模块相关的内容后,需要刷新你的 initramfs。在 Proxmox VE 上,可以通过执行以下命令来完成:
# update-initramfs -u -k all
To check if the modules are being loaded, the output of
要检查模块是否正在加载,可以查看以下输出
# lsmod | grep vfio
should include the four modules from above.
应包括上述四个模块。
Finally reboot to bring the changes into effect and check that it is indeed
enabled.
最后重启以使更改生效,并检查其是否确实已启用。
# dmesg | grep -e DMAR -e IOMMU -e AMD-Vi
should display that IOMMU, Directed I/O or Interrupt Remapping is
enabled; depending on hardware and kernel, the exact message can vary.
应该显示 IOMMU、定向 I/O 或中断重映射已启用,具体信息会根据硬件和内核有所不同。
For notes on how to troubleshoot or verify if IOMMU is working as intended, please
see the Verifying IOMMU Parameters
section in our wiki.
关于如何排查或验证 IOMMU 是否正常工作,请参见我们 wiki 中的“验证 IOMMU 参数”部分。
It is also important that the device(s) you want to pass through
are in a separate IOMMU group. This can be checked with a call to the Proxmox VE
API:
同样重要的是,您想要直通的设备必须处于单独的 IOMMU 组中。可以通过调用 Proxmox VE API 来检查这一点:
# pvesh get /nodes/{nodename}/hardware/pci --pci-class-blacklist ""
It is okay if the device is in an IOMMU group together with its functions
(e.g. a GPU with the HDMI Audio device) or with its root port or PCI(e) bridge.
如果设备与其功能(例如带有 HDMI 音频设备的 GPU)或其根端口或 PCI(e)桥接器在同一个 IOMMU 组中,也是可以的。
|
|
PCI(e) slots PCI(e) 插槽
Some platforms handle their physical PCI(e) slots differently. So, sometimes
it can help to put the card in another PCI(e) slot, if you do not get the
desired IOMMU group separation. |
|
|
Unsafe interrupts 不安全中断
For some platforms, it may be necessary to allow unsafe interrupts.
For this add the following line to a file ending with ‘.conf’ in
/etc/modprobe.d/: options vfio_iommu_type1 allow_unsafe_interrupts=1 Please be aware that this option can make your system unstable. |
GPU Passthrough Notes GPU 直通说明
It is not possible to display the frame buffer of the GPU via NoVNC or SPICE on
the Proxmox VE web interface.
无法通过 Proxmox VE 网页界面的 NoVNC 或 SPICE 显示 GPU 的帧缓冲区。
When passing through a whole GPU or a vGPU and graphic output is wanted, one
has to either physically connect a monitor to the card, or configure a remote
desktop software (for example, VNC or RDP) inside the guest.
当直通整个 GPU 或 vGPU 并需要图形输出时,必须要么物理连接显示器到显卡,要么在虚拟机内配置远程桌面软件(例如 VNC 或 RDP)。
If you want to use the GPU as a hardware accelerator, for example, for
programs using OpenCL or CUDA, this is not required.
如果您想将 GPU 用作硬件加速器,例如用于使用 OpenCL 或 CUDA 的程序,则不需要这样做。
10.9.2. Host Device Passthrough
10.9.2. 主机设备直通
The most used variant of PCI(e) passthrough is to pass through a whole
PCI(e) card, for example a GPU or a network card.
PCI(e)直通最常用的变体是直通整个 PCI(e)卡,例如 GPU 或网卡。
Host Configuration 主机配置
Proxmox VE tries to automatically make the PCI(e) device unavailable for the host.
However, if this doesn’t work, there are two things that can be done:
Proxmox VE 会尝试自动使 PCI(e) 设备对主机不可用。然而,如果这不起作用,可以采取以下两种措施:
-
pass the device IDs to the options of the vfio-pci modules by adding
通过添加,将设备 ID 传递给 vfio-pci 模块的选项options vfio-pci ids=1234:5678,4321:8765
to a .conf file in /etc/modprobe.d/ where 1234:5678 and 4321:8765 are the vendor and device IDs obtained by:
写入 /etc/modprobe.d/ 目录下的 .conf 文件,其中 1234:5678 和 4321:8765 是通过以下命令获得的厂商和设备 ID:# lspci -nn
-
blacklist the driver on the host completely, ensuring that it is free to bind for passthrough, with
在主机上完全将驱动程序列入黑名单,确保其可用于直通绑定,使用blacklist DRIVERNAME
in a .conf file in /etc/modprobe.d/.
在 /etc/modprobe.d/ 目录下的 .conf 文件中。To find the drivername, execute
要查找驱动名称,请执行# lspci -k
for example: 例如:
# lspci -k | grep -A 3 "VGA"
will output something similar to
将输出类似于以下内容01:00.0 VGA compatible controller: NVIDIA Corporation GP108 [GeForce GT 1030] (rev a1) Subsystem: Micro-Star International Co., Ltd. [MSI] GP108 [GeForce GT 1030] Kernel driver in use: <some-module> Kernel modules: <some-module>Now we can blacklist the drivers by writing them into a .conf file:
现在我们可以通过将驱动程序写入 .conf 文件来将其列入黑名单:echo "blacklist <some-module>" >> /etc/modprobe.d/blacklist.conf
For both methods you need to
update the initramfs again and
reboot after that.
对于这两种方法,您都需要再次更新 initramfs,然后重启。
Should this not work, you might need to set a soft dependency to load the GPU
modules before loading vfio-pci. This can be done with the softdep flag, see
also the manpages on modprobe.d for more information.
如果这不起作用,您可能需要设置一个软依赖,以便在加载 vfio-pci 之前加载 GPU 模块。可以使用 softdep 标志来实现,更多信息请参阅 modprobe.d 的手册页。
For example, if you are using drivers named <some-module>:
例如,如果您使用的驱动程序名为 <some-module>:
# echo "softdep <some-module> pre: vfio-pci" >> /etc/modprobe.d/<some-module>.conf
To check if your changes were successful, you can use
要检查您的更改是否成功,您可以使用
# lspci -nnk
and check your device entry. If it says
并检查您的设备条目。如果显示
Kernel driver in use: vfio-pci
or the in use line is missing entirely, the device is ready to be used for
passthrough.
或者“in use”行完全缺失,则该设备已准备好用于直通。
|
|
Mediated devices 中介设备
For mediated devices this line will differ, as the device will be owned by the
host driver directly, not vfio-pci. |
VM Configuration 虚拟机配置
When passing through a GPU, the best compatibility is reached when using
q35 as machine type, OVMF (UEFI for VMs) instead of SeaBIOS and PCIe
instead of PCI. Note that if you want to use OVMF for GPU passthrough, the
GPU needs to have a UEFI capable ROM, otherwise use SeaBIOS instead. To check if
the ROM is UEFI capable, see the
PCI Passthrough Examples
wiki.
在直通 GPU 时,最佳兼容性是在使用 q35 作为机器类型、OVMF(虚拟机的 UEFI)替代 SeaBIOS 以及使用 PCIe 替代 PCI 的情况下实现的。请注意,如果您想使用 OVMF 进行 GPU 直通,GPU 需要具备支持 UEFI 的 ROM,否则请改用 SeaBIOS。要检查 ROM 是否支持 UEFI,请参阅 PCI 直通示例维基。
Furthermore, when using OVMF, it may be possible to disable VGA arbitration, reducing
the amount of legacy code that needs to run during boot. To disable VGA arbitration:
此外,使用 OVMF 可能可以禁用 VGA 仲裁,从而减少启动时需要运行的遗留代码量。要禁用 VGA 仲裁:
echo "options vfio-pci ids=<vendor-id>,<device-id> disable_vga=1" > /etc/modprobe.d/vfio.conf
replacing the <vendor-id> and <device-id> with the ones obtained from:
将 <vendor-id> 和 <device-id> 替换为从以下命令获得的值:
# lspci -nn
PCI devices can be added in the web interface in the hardware section of the VM.
Alternatively, you can use the command line; set the hostpciX option in the VM
configuration, for example by executing:
PCI 设备可以在虚拟机的硬件部分通过网页界面添加。或者,您也可以使用命令行;在虚拟机配置中设置 hostpciX 选项,例如执行:
# qm set VMID -hostpci0 00:02.0
or by adding a line to the VM configuration file:
或者通过向虚拟机配置文件添加一行:
hostpci0: 00:02.0
If your device has multiple functions (e.g., ‘00:02.0’ and ‘00:02.1’),
you can pass them through all together with the shortened syntax ‘00:02’.
This is equivalent to checking the All Functions checkbox in the
web interface.
如果您的设备具有多个功能(例如,“00:02.0”和“00:02.1”),您可以使用简化语法``00:02``将它们全部直通。这相当于在网页界面中勾选``所有功能``复选框。
There are some options which may be necessary, depending on the device
and guest OS:
根据设备和客户操作系统的不同,可能需要一些选项:
-
x-vga=on|off marks the PCI(e) device as the primary GPU of the VM. With this enabled the vga configuration option will be ignored.
x-vga=on|off 将 PCI(e)设备标记为虚拟机的主 GPU。启用此选项后,vga 配置选项将被忽略。 -
pcie=on|off tells Proxmox VE to use a PCIe or PCI port. Some guests/device combination require PCIe rather than PCI. PCIe is only available for q35 machine types.
pcie=on|off 告诉 Proxmox VE 使用 PCIe 或 PCI 端口。某些客户机/设备组合需要 PCIe 而非 PCI。PCIe 仅适用于 q35 机器类型。 -
rombar=on|off makes the firmware ROM visible for the guest. Default is on. Some PCI(e) devices need this disabled.
rombar=on|off 使固件 ROM 对客户机可见。默认是开启的。有些 PCI(e) 设备需要禁用此功能。 -
romfile=<path>, is an optional path to a ROM file for the device to use. This is a relative path under /usr/share/kvm/.
romfile=<path>,是设备使用的 ROM 文件的可选路径。该路径是 /usr/share/kvm/ 下的相对路径。
An example of PCIe passthrough with a GPU set to primary:
一个将 GPU 设置为主设备的 PCIe 直通示例:
# qm set VMID -hostpci0 02:00,pcie=on,x-vga=on
You can override the PCI vendor ID, device ID, and subsystem IDs that will be
seen by the guest. This is useful if your device is a variant with an ID that
your guest’s drivers don’t recognize, but you want to force those drivers to be
loaded anyway (e.g. if you know your device shares the same chipset as a
supported variant).
您可以覆盖来宾系统看到的 PCI 供应商 ID、设备 ID 和子系统 ID。如果您的设备是一个变体,具有来宾驱动程序无法识别的 ID,但您希望强制加载这些驱动程序(例如,如果您知道您的设备与受支持的变体共享相同的芯片组),这将非常有用。
The available options are vendor-id, device-id, sub-vendor-id, and
sub-device-id. You can set any or all of these to override your device’s
default IDs.
可用的选项有 vendor-id、device-id、sub-vendor-id 和 sub-device-id。您可以设置其中任意一个或全部,以覆盖设备的默认 ID。
For example: 例如:
# qm set VMID -hostpci0 02:00,device-id=0x10f6,sub-vendor-id=0x0000
10.9.3. SR-IOV
Another variant for passing through PCI(e) devices is to use the hardware
virtualization features of your devices, if available.
另一种直通 PCI(e)设备的方式是使用设备的硬件虚拟化功能(如果可用)。
|
|
Enabling SR-IOV 启用 SR-IOV
To use SR-IOV, platform support is especially important. It may be necessary
to enable this feature in the BIOS/UEFI first, or to use a specific PCI(e) port
for it to work. If in doubt, consult the manual of the platform or contact its
vendor. |
SR-IOV (Single-Root Input/Output Virtualization) enables
a single device to provide multiple VF (Virtual Functions) to the
system. Each of those VFs can be used in a different VM, with full hardware
features and also better performance and lower latency than software
virtualized devices.
SR-IOV(单根输入/输出虚拟化)使单个设备能够向系统提供多个 VF(虚拟功能)。每个 VF 都可以在不同的虚拟机中使用,具备完整的硬件功能,同时比软件虚拟化设备具有更好的性能和更低的延迟。
Currently, the most common use case for this are NICs (Network
Interface Cards) with SR-IOV support, which can provide multiple VFs per
physical port. This allows features such as checksum offloading to be used
inside a VM, reducing the (host) CPU overhead.
目前,最常见的使用场景是支持 SR-IOV 的网卡(网络接口卡),它可以为每个物理端口提供多个 VF。这允许在虚拟机内使用如校验和卸载等功能,从而减少(主机)CPU 的开销。
Host Configuration 主机配置
Generally, there are two methods for enabling virtual functions on a device.
通常,有两种方法可以在设备上启用虚拟功能。
-
sometimes there is an option for the driver module e.g. for some Intel drivers
有时驱动模块会有一个选项,例如某些 Intel 驱动max_vfs=4
which could be put in a file with a .conf ending under /etc/modprobe.d/. (Do not forget to update your initramfs after that)
可以将带有.conf 后缀的文件放置在 /etc/modprobe.d/ 目录下。(别忘了之后更新你的 initramfs)Please refer to your driver module documentation for the exact parameters and options.
请参考你的驱动模块文档以获取准确的参数和选项。 -
The second, more generic, approach is using the sysfs. If a device and driver supports this you can change the number of VFs on the fly. For example, to setup 4 VFs on device 0000:01:00.0 execute:
第二种更通用的方法是使用 sysfs。如果设备和驱动支持此功能,你可以动态更改虚拟功能(VF)的数量。例如,要在设备 0000:01:00.0 上设置 4 个 VF,执行:# echo 4 > /sys/bus/pci/devices/0000:01:00.0/sriov_numvfs
To make this change persistent you can use the ‘sysfsutils’ Debian package. After installation configure it via /etc/sysfs.conf or a ‘FILE.conf’ in /etc/sysfs.d/.
要使此更改持久生效,可以使用 Debian 包 `sysfsutils`。安装后,通过 /etc/sysfs.conf 或 /etc/sysfs.d/ 中的 `FILE.conf` 进行配置。
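For example, a persistent entry for the sysfs approach could look like this (a sketch, assuming the sysfsutils configuration format of attribute = value with the path relative to /sys):
# /etc/sysfs.d/sriov.conf
bus/pci/devices/0000:01:00.0/sriov_numvfs = 4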
VM Configuration 虚拟机配置
After creating VFs, you should see them as separate PCI(e) devices when
outputting them with lspci. Get their ID and pass them through like a
normal PCI(e) device.
创建 VF 后,使用 lspci 输出时应能看到它们作为独立的 PCI(e) 设备。获取它们的 ID,并像普通 PCI(e) 设备一样传递。
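As a sketch, the VFs can be located and one of them passed through like any other PCI(e) device (the VF address shown is only an example):
# list the virtual functions
lspci -nn | grep -i "virtual function"
# pass one VF through to the VM
qm set VMID -hostpci0 01:10.0,pcie=on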
10.9.4. Mediated Devices (vGPU, GVT-g)
10.9.4. 中介设备(vGPU,GVT-g)
Mediated devices are another method to reuse features and performance from
physical hardware for virtualized hardware. These are most commonly found in
virtualized GPU setups such as Intel’s GVT-g and NVIDIA’s vGPUs used in their
GRID technology.
中介设备是另一种将物理硬件的功能和性能复用于虚拟硬件的方法。这种设备最常见于虚拟化 GPU 环境中,例如 Intel 的 GVT-g 和 NVIDIA 在其 GRID 技术中使用的 vGPU。
With this, a physical Card is able to create virtual cards, similar to SR-IOV.
The difference is that mediated devices do not appear as PCI(e) devices in the
host, and as such are only suited for use in virtual machines.
通过这种方式,物理显卡能够创建虚拟显卡,类似于 SR-IOV。不同之处在于,中介设备不会作为 PCI(e)设备出现在主机中,因此仅适合在虚拟机中使用。
Host Configuration 主机配置
In general your card’s driver must support that feature, otherwise it will
not work. So please refer to your vendor for compatible drivers and how to
configure them.
一般来说,您的显卡驱动必须支持该功能,否则无法正常工作。请参考您的供应商以获取兼容驱动及其配置方法。
Intel’s drivers for GVT-g are integrated in the Kernel and should work
with 5th, 6th and 7th generation Intel Core Processors, as well as E3 v4, E3
v5 and E3 v6 Xeon Processors.
Intel 的 GVT-g 驱动已集成在内核中,应支持第 5、6 和 7 代 Intel Core 处理器,以及 E3 v4、E3 v5 和 E3 v6 Xeon 处理器。
To enable it for Intel Graphics, you have to make sure to load the module
kvmgt (for example via /etc/modules) and to enable it on the
kernel command line by adding the following parameter:
要为 Intel 显卡启用它,您必须确保加载模块 kvmgt(例如通过 /etc/modules),并在内核命令行中启用它,同时添加以下参数:
i915.enable_gvt=1
After that remember to
update the initramfs,
and reboot your host.
之后请记得更新 initramfs,并重启您的主机。
VM Configuration 虚拟机配置
To use a mediated device, simply specify the mdev property on a hostpciX
VM configuration option.
要使用中介设备,只需在 hostpciX 虚拟机配置选项中指定 mdev 属性。
You can get the supported devices via the sysfs. For example, to list the
supported types for the device 0000:00:02.0 you would simply execute:
您可以通过 sysfs 获取支持的设备。例如,要列出设备 0000:00:02.0 支持的类型,只需执行:
# ls /sys/bus/pci/devices/0000:00:02.0/mdev_supported_types
Each entry is a directory which contains the following important files:
每个条目都是一个目录,包含以下重要文件:
-
available_instances contains the number of still available instances of this type; each mdev used in a VM reduces this.
available_instances 包含该类型仍可用的实例数量,每个虚拟机中使用的 mdev 都会减少此数量。 -
description contains a short description about the capabilities of the type
description 包含关于该类型功能的简短描述 -
create is the endpoint to create such a device, Proxmox VE does this automatically for you, if a hostpciX option with mdev is configured.
create 是创建此类设备的端点,如果配置了带有 mdev 的 hostpciX 选项,Proxmox VE 会自动为您完成此操作。
Example configuration with an Intel GVT-g vGPU (Intel Skylake 6700k):
使用 Intel GVT-g vGPU(Intel Skylake 6700k)的示例配置:
# qm set VMID -hostpci0 00:02.0,mdev=i915-GVTg_V5_4
With this set, Proxmox VE automatically creates such a device on VM start, and
cleans it up again when the VM stops.
设置完成后,Proxmox VE 会在虚拟机启动时自动创建此类设备,并在虚拟机停止时将其清理。
10.9.5. Use in Clusters
10.9.5. 集群中的使用
It is also possible to map devices on a cluster level, so that they can be
properly used with HA, hardware changes are detected, and non-root users
can configure them. See Resource Mapping
for details on that.
也可以在集群级别映射设备,以便它们能够被高可用性(HA)正确使用,硬件更改能够被检测到,且非 root 用户可以配置它们。有关详细信息,请参见资源映射。
10.9.6. vIOMMU (emulated IOMMU)
10.9.6. vIOMMU(仿真 IOMMU)
vIOMMU is the emulation of a hardware IOMMU within a virtual machine, providing
improved memory access control and security for virtualized I/O devices. Using
the vIOMMU option also allows you to pass through PCI(e) devices to level-2 VMs
in level-1 VMs via
Nested Virtualization.
To pass through physical PCI(e) devices from the host to nested VMs, follow the
PCI(e) passthrough instructions.
vIOMMU 是在虚拟机内对硬件 IOMMU 的仿真,提供了改进的内存访问控制和虚拟化 I/O 设备的安全性。使用 vIOMMU 选项还允许通过嵌套虚拟化将 PCI(e)设备传递给一级虚拟机中的二级虚拟机。要将物理 PCI(e)设备从主机传递给嵌套虚拟机,请按照 PCI(e)直通的说明操作。
There are currently two vIOMMU implementations available: Intel and VirtIO.
目前有两种 vIOMMU 实现可用:Intel 和 VirtIO。
Intel vIOMMU
Intel vIOMMU specific VM requirements:
Intel vIOMMU 特定的虚拟机要求:
-
Whether you are using an Intel or AMD CPU on your host, it is important to set intel_iommu=on in the VMs kernel parameters.
无论您的主机使用的是 Intel 还是 AMD CPU,重要的是在虚拟机的内核参数中设置 intel_iommu=on。 -
To use Intel vIOMMU you need to set q35 as the machine type.
要使用 Intel vIOMMU,您需要将机器类型设置为 q35。
If all requirements are met, you can add viommu=intel to the machine parameter
in the configuration of the VM that should be able to pass through PCI devices.
如果满足所有要求,您可以在应能直通 PCI 设备的虚拟机配置中,将 viommu=intel 添加到机器参数中。
# qm set VMID -machine q35,viommu=intel
VirtIO vIOMMU
This vIOMMU implementation is more recent and does not have as many limitations
as Intel vIOMMU, but is currently less used in production and less documented.
这个 vIOMMU 实现较新,没有 Intel vIOMMU 那么多限制,但目前在生产环境中使用较少,文档也较少。
With VirtIO vIOMMU there is no need to set any kernel parameters. It is also
not necessary to use q35 as the machine type, but it is advisable if you want
to use PCIe.
使用 VirtIO vIOMMU 无需设置任何内核参数。也不必使用 q35 作为机器类型,但如果想使用 PCIe,建议使用 q35。
# qm set VMID -machine q35,viommu=virtio
10.10. Hookscripts 10.10. 钩子脚本
You can add a hook script to VMs with the config property hookscript.
您可以通过配置属性 hookscript 向虚拟机添加钩子脚本。
# qm set 100 --hookscript local:snippets/hookscript.pl
It will be called during various phases of the guest’s lifetime.
For an example and documentation see the example script under
/usr/share/pve-docs/examples/guest-example-hookscript.pl.
该脚本将在客户机生命周期的各个阶段被调用。示例和文档请参见 /usr/share/pve-docs/examples/guest-example-hookscript.pl 下的示例脚本。
10.11. Hibernation 10.11. 休眠
You can suspend a VM to disk with the GUI option Hibernate or with
您可以通过图形界面选项“休眠”将虚拟机挂起到磁盘,或者使用
# qm suspend ID --todisk
That means that the current content of the memory will be saved onto disk
and the VM gets stopped. On the next start, the memory content will be
loaded and the VM can continue where it was left off.
这意味着当前内存的内容将被保存到磁盘上,虚拟机将被停止。下次启动时,内存内容将被加载,虚拟机可以从中断的地方继续运行。
If no target storage for the memory is given, it will be chosen
automatically as the first of the following:
如果没有指定内存的目标存储,将自动选择,优先选择以下之一:
-
The storage vmstatestorage from the VM config.
虚拟机配置中的存储 vmstatestorage。 -
The first shared storage from any VM disk.
来自任何虚拟机磁盘的第一个共享存储。 -
The first non-shared storage from any VM disk.
来自任何虚拟机磁盘的第一个非共享存储。 -
The storage local as a fallback.
本地存储作为备用。
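To avoid relying on this fallback order, the target can also be pinned explicitly via the vmstatestorage config option before hibernating (a sketch; replace the storage name with one of your own):
qm set 100 --vmstatestorage local-lvm
qm suspend 100 --todisk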
10.12. Resource Mapping 10.12. 资源映射
When using or referencing local resources (e.g. address of a pci device), using
the raw address or id is sometimes problematic, for example:
在使用或引用本地资源(例如 PCI 设备的地址)时,直接使用原始地址或 ID 有时会带来问题,例如:
-
when using HA, a different device with the same id or path may exist on the target node, and if one is not careful when assigning such guests to HA groups, the wrong device could be used, breaking configurations.
在使用高可用性(HA)时,目标节点上可能存在具有相同 ID 或路径的不同设备,如果在将此类虚拟机分配到 HA 组时不够谨慎,可能会使用错误的设备,导致配置出错。 -
changing hardware can change ids and paths, so one would have to check all assigned devices and see if the path or id is still correct.
更换硬件可能会改变 ID 和路径,因此需要检查所有分配的设备,确认路径或 ID 是否仍然正确。
To handle this better, one can define cluster wide resource mappings, such that
a resource has a cluster unique, user selected identifier which can correspond
to different devices on different hosts. With this, HA won’t start a guest with
a wrong device, and hardware changes can be detected.
为更好地处理此问题,可以定义集群范围的资源映射,使资源拥有集群唯一的、用户选择的标识符,该标识符可以对应不同主机上的不同设备。通过这种方式,HA 不会启动使用错误设备的虚拟机,并且可以检测硬件更改。
Creating such a mapping can be done with the Proxmox VE web GUI under Datacenter
in the relevant tab in the Resource Mappings category, or on the cli with
可以通过 Proxmox VE 的 Web GUI 在数据中心下的资源映射类别的相关标签中创建这样的映射,或者在命令行中使用
# pvesh create /cluster/mapping/<type> <options>
Where <type> is the hardware type (currently either pci, usb or
dir) and <options> are the device mappings and other
configuration parameters.
其中<type>是硬件类型(目前为 pci、usb 或 dir),<options>是设备映射和其他配置参数。
Note that the options must include a map property with all identifying
properties of that hardware, so that it’s possible to verify the hardware did
not change and the correct device is passed through.
请注意,选项中必须包含一个 map 属性,包含该硬件的所有识别属性,以便能够验证硬件未发生变化并且正确的设备被传递。
For example to add a PCI device as device1 with the path 0000:01:00.0 that
has the device id 0001 and the vendor id 0002 on the node node1, and
0000:02:00.0 on node2 you can add it with:
例如,要将路径为 0000:01:00.0、设备 ID 为 0001、供应商 ID 为 0002 的 PCI 设备作为 device1 添加到节点 node1 上,并在节点 node2 上添加路径为 0000:02:00.0 的设备,可以使用以下命令:
# pvesh create /cluster/mapping/pci --id device1 \
   --map node=node1,path=0000:01:00.0,id=0002:0001 \
   --map node=node2,path=0000:02:00.0,id=0002:0001
You must repeat the map parameter for each node where that device should have
a mapping (note that you can currently only map one USB device per node per
mapping).
您必须为每个应映射该设备的节点重复使用 map 参数(请注意,目前每个映射每个节点只能映射一个 USB 设备)。
Using the GUI makes this much easier, as the correct properties are
automatically picked up and sent to the API.
使用 GUI 会更简单,因为正确的属性会被自动识别并发送到 API。
It’s also possible for PCI devices to provide multiple devices per node with
multiple map properties for the nodes. If such a device is assigned to a guest,
the first free one will be used when the guest is started. The order of the
paths given is also the order in which they are tried, so arbitrary allocation
policies can be implemented.
PCI 设备也可以通过为节点设置多个 map 属性,为每个节点提供多个设备。如果将此类设备分配给来宾,启动来宾时将使用第一个空闲设备。给出的路径顺序也是尝试的顺序,因此可以实现任意的分配策略。
This is useful for devices with SR-IOV, since sometimes it is not important
which exact virtual function is passed through.
这对于具有 SR-IOV 的设备非常有用,因为有时并不重要传递的是哪个具体的虚拟功能。
You can assign such a device to a guest either with the GUI or with
您可以通过图形界面或使用
# qm set ID -hostpci0 <name>
for PCI devices, or 用于 PCI 设备,或
# qm set <vmid> -usb0 <name>
for USB devices. 用于 USB 设备,将此类设备分配给虚拟机。
Where <vmid> is the guest’s id and <name> is the chosen name for the created
mapping. All usual options for passing through the devices are allowed, such as
mdev.
其中 <vmid> 是虚拟机的 ID,<name> 是为创建的映射选择的名称。允许使用所有常见的设备直通选项,例如 mdev。
To create mappings Mapping.Modify on /mapping/<type>/<name> is necessary
(where <type> is the device type and <name> is the name of the mapping).
要创建映射,必须在 /mapping/<type>/<name> 上具有 Mapping.Modify 权限(其中 <type> 是设备类型,<name> 是映射名称)。
To use these mappings, Mapping.Use on /mapping/<type>/<name> is necessary
(in addition to the normal guest privileges to edit the configuration).
要使用这些映射,除了具有编辑配置的正常来宾权限外,还必须在 /mapping/<type>/<name> 上具有 Mapping.Use 权限。
There are additional options when defining a cluster wide resource mapping.
Currently there are the following options:
定义集群范围资源映射时有附加选项。目前有以下选项:
-
mdev (PCI): This marks the PCI device as being capable of providing mediated devices. When this is enabled, you can select a type when configuring it on the guest. If multiple PCI devices are selected for the mapping, the mediated device will be created on the first one where there are any available instances of the selected type.
mdev(PCI):这表示该 PCI 设备能够提供中介设备功能。启用后,您可以在客机配置时选择类型。如果为映射选择了多个 PCI 设备,中介设备将在第一个有可用所选类型实例的设备上创建。 -
live-migration-capable (PCI): This marks the PCI device as being capable of being live migrated between nodes. This requires driver and hardware support. Only NVIDIA GPUs with recent kernel are known to support this. Note that live migrating passed through devices is an experimental feature and may not work or cause issues.
支持实时迁移(PCI):这表示该 PCI 设备能够在节点之间进行实时迁移。这需要驱动程序和硬件支持。目前只有带有较新内核的 NVIDIA GPU 已知支持此功能。请注意,实时迁移直通设备是一个实验性功能,可能无法正常工作或导致问题。
10.13. Managing Virtual Machines with qm
10.13. 使用 qm 管理虚拟机
qm is the tool to manage QEMU/KVM virtual machines on Proxmox VE. You can
create and destroy virtual machines, and control execution
(start/stop/suspend/resume). Besides that, you can use qm to set
parameters in the associated config file. It is also possible to
create and delete virtual disks.
qm 是用于管理 Proxmox VE 上 QEMU/KVM 虚拟机的工具。您可以创建和销毁虚拟机,并控制其执行(启动/停止/挂起/恢复)。此外,您还可以使用 qm 设置关联配置文件中的参数。也可以创建和删除虚拟磁盘。
10.13.1. CLI Usage Examples
10.13.1. 命令行使用示例
Using an iso file uploaded on the local storage, create a VM
with a 4 GB IDE disk on the local-lvm storage
使用上传到本地存储的 iso 文件,在 local-lvm 存储上创建一个带有 4 GB IDE 磁盘的虚拟机
# qm create 300 -ide0 local-lvm:4 -net0 e1000 -cdrom local:iso/proxmox-mailgateway_2.1.iso
Start the new VM 启动新虚拟机
# qm start 300
Send a shutdown request, then wait until the VM is stopped.
发送关机请求,然后等待虚拟机停止运行。
# qm shutdown 300 && qm wait 300
Same as above, but only wait for 40 seconds.
同上,但只等待 40 秒。
# qm shutdown 300 && qm wait 300 -timeout 40
If the VM does not shut down, force-stop it and overrule any running shutdown
tasks. As stopping VMs may incur data loss, use it with caution.
如果虚拟机未关闭,则强制停止它,并覆盖任何正在运行的关闭任务。由于停止虚拟机可能导致数据丢失,请谨慎使用。
# qm stop 300 -overrule-shutdown 1
Destroying a VM always removes it from Access Control Lists and it always
removes the firewall configuration of the VM. You have to activate
--purge, if you want to additionally remove the VM from replication jobs,
backup jobs and HA resource configurations.
销毁虚拟机总是会将其从访问控制列表中移除,并且总是会移除虚拟机的防火墙配置。如果您还想将虚拟机从复制任务、备份任务和高可用资源配置中移除,则必须激活 --purge。
# qm destroy 300 --purge
Move a disk image to a different storage.
将磁盘镜像移动到不同的存储。
# qm move-disk 300 scsi0 other-storage
Reassign a disk image to a different VM. This will remove the disk scsi1 from
the source VM and attaches it as scsi3 to the target VM. In the background
the disk image is being renamed so that the name matches the new owner.
将磁盘映像重新分配给不同的虚拟机。这将从源虚拟机中移除 scsi1 磁盘,并将其作为 scsi3 附加到目标虚拟机。后台会重命名磁盘映像,使名称与新所有者匹配。
# qm move-disk 300 scsi1 --target-vmid 400 --target-disk scsi3
10.14. Configuration 10.14. 配置
VM configuration files are stored inside the Proxmox cluster file
system, and can be accessed at /etc/pve/qemu-server/<VMID>.conf.
Like other files stored inside /etc/pve/, they get automatically
replicated to all other cluster nodes.
虚拟机配置文件存储在 Proxmox 集群文件系统内,可以通过 /etc/pve/qemu-server/<VMID>.conf 访问。与存储在 /etc/pve/ 内的其他文件一样,它们会自动复制到所有其他集群节点。
|
|
VMIDs < 100 are reserved for internal purposes, and VMIDs need to be
unique cluster wide. VMID 小于 100 的编号保留用于内部用途,且 VMID 需要在整个集群中唯一。 |
示例虚拟机配置
boot: order=virtio0;net0
cores: 1
sockets: 1
memory: 512
name: webmail
ostype: l26
net0: e1000=EE:D2:28:5F:B6:3E,bridge=vmbr0
virtio0: local:vm-100-disk-1,size=32G
Those configuration files are simple text files, and you can edit them
using a normal text editor (vi, nano, …). This is sometimes
useful for small corrections, but keep in mind that you need to
restart the VM to apply such changes.
这些配置文件是简单的文本文件,您可以使用普通的文本编辑器(vi、nano 等)进行编辑。有时这样做对进行小的修正很有用,但请记住,您需要重启虚拟机才能应用这些更改。
For that reason, it is usually better to use the qm command to
generate and modify those files, or do the whole thing using the GUI.
Our toolkit is smart enough to instantaneously apply most changes to
running VMs. This feature is called "hot plug", and there is no
need to restart the VM in that case.
因此,通常更好使用 qm 命令来生成和修改这些文件,或者通过图形界面完成整个操作。我们的工具包足够智能,能够即时应用大多数对运行中虚拟机的更改。此功能称为“热插拔”,在这种情况下无需重启虚拟机。
10.14.1. File Format 10.14.1. 文件格式
VM configuration files use a simple colon separated key/value
format. Each line has the following format:
虚拟机配置文件使用简单的冒号分隔的键/值格式。每行的格式如下:
# this is a comment
OPTION: value
Blank lines in those files are ignored, and lines starting with a #
character are treated as comments and are also ignored.
这些文件中的空行会被忽略,以#字符开头的行被视为注释,也会被忽略。
10.14.2. Snapshots 10.14.2. 快照
When you create a snapshot, qm stores the configuration at snapshot
time into a separate snapshot section within the same configuration
file. For example, after creating a snapshot called “testsnapshot”,
your configuration file will look like this:
当您创建快照时,qm 会将快照时的配置存储到同一配置文件内的一个单独快照部分。例如,创建名为“testsnapshot”的快照后,您的配置文件将如下所示:
虚拟机配置与快照
memory: 512
swap: 512
parent: testsnapshot
...

[testsnapshot]
memory: 512
swap: 512
snaptime: 1457170803
...
There are a few snapshot related properties like parent and
snaptime. The parent property is used to store the parent/child
relationship between snapshots. snaptime is the snapshot creation
time stamp (Unix epoch).
有一些与快照相关的属性,如 parent 和 snaptime。parent 属性用于存储快照之间的父子关系。snaptime 是快照的创建时间戳(Unix 纪元时间)。
You can optionally save the memory of a running VM with the option vmstate.
For details about how the target storage gets chosen for the VM state, see
State storage selection in the chapter
Hibernation.
您可以选择使用 vmstate 选项保存正在运行的虚拟机的内存。有关目标存储如何为虚拟机状态选择的详细信息,请参见第“休眠”章节中的状态存储选择。
10.14.3. Options 10.14.3. 选项
-
acpi: <boolean> (default = 1)
acpi: <布尔值>(默认 = 1) -
Enable/disable ACPI. 启用/禁用 ACPI。
- affinity: <string> affinity: <字符串>
-
List of host cores used to execute guest processes, for example: 0,5,8-11
用于执行客户机进程的主机核心列表,例如:0,5,8-11 - agent: [enabled=]<1|0> [,freeze-fs-on-backup=<1|0>] [,fstrim_cloned_disks=<1|0>] [,type=<virtio|isa>]
-
Enable/disable communication with the QEMU Guest Agent and its properties.
启用/禁用与 QEMU 客户端代理的通信及其属性。-
enabled=<boolean> (default = 0)
enabled=<boolean> (默认 = 0) -
Enable/disable communication with a QEMU Guest Agent (QGA) running in the VM.
启用/禁用与虚拟机中运行的 QEMU 客户端代理(QGA)的通信。 -
freeze-fs-on-backup=<boolean> (default = 1)
freeze-fs-on-backup=<boolean>(默认值 = 1) -
Freeze/thaw guest filesystems on backup for consistency.
在备份时冻结/解冻客户机文件系统以保证一致性。 -
fstrim_cloned_disks=<boolean> (default = 0)
fstrim_cloned_disks=<boolean>(默认值 = 0) -
Run fstrim after moving a disk or migrating the VM.
在移动磁盘或迁移虚拟机后运行 fstrim。 -
type=<isa | virtio> (default = virtio)
type=<isa | virtio>(默认 = virtio) -
Select the agent type
选择代理类型
-
enabled=<boolean> (default = 0)
- amd-sev: [type=]<sev-type> [,allow-smt=<1|0>] [,kernel-hashes=<1|0>] [,no-debug=<1|0>] [,no-key-sharing=<1|0>]
-
Secure Encrypted Virtualization (SEV) features by AMD CPUs
AMD CPU 的安全加密虚拟化(SEV)功能-
allow-smt=<boolean> (default = 1)
allow-smt=<boolean>(默认值 = 1) -
Sets policy bit to allow Simultaneous Multi Threading (SMT) (Ignored unless for SEV-SNP)
设置策略位以允许同时多线程(SMT)(除非用于 SEV-SNP,否则忽略) -
kernel-hashes=<boolean> (default = 0)
kernel-hashes=<boolean>(默认值 = 0) -
Add kernel hashes to guest firmware for measured linux kernel launch
将内核哈希添加到客户机固件中,以实现受测量的 Linux 内核启动 -
no-debug=<boolean> (default = 0)
no-debug=<布尔值>(默认 = 0) -
Sets policy bit to disallow debugging of guest
设置策略位以禁止调试客户机 -
no-key-sharing=<boolean> (default = 0)
no-key-sharing=<布尔值>(默认 = 0) -
Sets policy bit to disallow key sharing with other guests (Ignored for SEV-SNP)
设置策略位以禁止与其他客户机共享密钥(对 SEV-SNP 忽略) - type=<sev-type>
-
Enable standard SEV with type=std or enable experimental SEV-ES with the es option or enable experimental SEV-SNP with the snp option.
使用 type=std 启用标准 SEV,使用 es 选项启用实验性的 SEV-ES,或使用 snp 选项启用实验性的 SEV-SNP。
-
allow-smt=<boolean> (default = 1)
- arch: <aarch64 | x86_64>
-
Virtual processor architecture. Defaults to the host.
虚拟处理器架构。默认为主机架构。 - args: <string>
-
Arbitrary arguments passed to kvm, for example:
传递给 kvm 的任意参数,例如:args: -no-reboot -smbios type=0,vendor=FOO
this option is for experts only.
此选项仅供专家使用。 -
audio0: device=<ich9-intel-hda|intel-hda|AC97> [,driver=<spice|none>]
audio0: 设备=<ich9-intel-hda|intel-hda|AC97> [,驱动=<spice|none>] -
Configure a audio device, useful in combination with QXL/Spice.
配置音频设备,适用于与 QXL/Spice 结合使用。-
device=<AC97 | ich9-intel-hda | intel-hda>
设备=<AC97 | ich9-intel-hda | intel-hda> -
Configure an audio device.
配置音频设备。 -
driver=<none | spice> (default = spice)
driver=<none | spice>(默认 = spice) -
Driver backend for the audio device.
音频设备的驱动后端。
-
device=<AC97 | ich9-intel-hda | intel-hda>
-
autostart: <boolean> (default = 0)
autostart: <boolean>(默认 = 0) -
Automatic restart after crash (currently ignored).
崩溃后自动重启(当前被忽略)。 -
balloon: <integer> (0 - N)
balloon: <整数> (0 - N) -
Amount of target RAM for the VM in MiB. Using zero disables the balloon driver.
虚拟机目标内存大小,单位为 MiB。设置为零则禁用气球驱动。 -
bios: <ovmf | seabios> (default = seabios)
bios: <ovmf | seabios>(默认 = seabios) -
Select BIOS implementation.
选择 BIOS 实现方式。 - boot: [[legacy=]<[acdn]{1,4}>] [,order=<device[;device...]>]
-
Specify guest boot order. Use the order= sub-property, as usage with no key or with legacy= is deprecated.
指定客户机启动顺序。使用 order= 子属性,使用无键或 legacy= 的方式已被弃用。-
legacy=<[acdn]{1,4}> (default = cdn)
legacy=<[acdn]{1,4}> (默认 = cdn) -
Boot on floppy (a), hard disk (c), CD-ROM (d), or network (n). Deprecated, use order= instead.
从软盘 (a)、硬盘 (c)、光驱 (d) 或网络 (n) 启动。已弃用,请改用 order=。 -
order=<device[;device...]>
order=<设备[;设备...]> -
The guest will attempt to boot from devices in the order they appear here.
客户机将尝试按照此处列出的顺序从设备启动。Disks, optical drives and passed-through storage USB devices will be directly booted from, NICs will load PXE, and PCIe devices will either behave like disks (e.g. NVMe) or load an option ROM (e.g. RAID controller, hardware NIC).
磁盘、光驱和直通的存储 USB 设备将直接启动,网卡将加载 PXE,PCIe 设备则要么表现得像磁盘(例如 NVMe),要么加载选项 ROM(例如 RAID 控制器、硬件网卡)。Note that only devices in this list will be marked as bootable and thus loaded by the guest firmware (BIOS/UEFI). If you require multiple disks for booting (e.g. software-raid), you need to specify all of them here.
请注意,只有此列表中的设备会被标记为可启动设备,从而由客户机固件(BIOS/UEFI)加载。如果您需要多个磁盘启动(例如软件 RAID),则需要在此处指定所有磁盘。Overrides the deprecated legacy=[acdn]* value when given.
当提供时,覆盖已弃用的 legacy=[acdn]* 值。
-
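For example, assuming a VM 100 that actually has scsi0, ide2 and net0 configured, booting from the disk first and falling back to CD-ROM and then network could look like this (the quotes keep the semicolons away from the shell):
qm set 100 --boot 'order=scsi0;ide2;net0'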
- bootdisk: (ide|sata|scsi|virtio)\d+
-
Enable booting from specified disk. Deprecated: Use boot: order=foo;bar instead.
启用从指定磁盘启动。已弃用:请改用 boot: order=foo;bar。 - cdrom: <volume>
-
This is an alias for option -ide2
这是选项 -ide2 的别名 - cicustom: [meta=<volume>] [,network=<volume>] [,user=<volume>] [,vendor=<volume>]
-
cloud-init: Specify custom files to replace the automatically generated ones at start.
cloud-init:指定自定义文件以替换启动时自动生成的文件。- meta=<volume>
-
Specify a custom file containing all meta data passed to the VM via cloud-init. This is provider specific, meaning configdrive2 and nocloud differ.
指定一个自定义文件,包含通过 cloud-init 传递给虚拟机的所有元数据。此项依赖于提供者,意味着 configdrive2 和 nocloud 不同。 - network=<volume>
-
To pass a custom file containing all network data to the VM via cloud-init.
通过 cloud-init 向虚拟机传递包含所有网络数据的自定义文件。 - user=<volume>
-
To pass a custom file containing all user data to the VM via cloud-init.
通过 cloud-init 向虚拟机传递包含所有用户数据的自定义文件。 - vendor=<volume>
-
To pass a custom file containing all vendor data to the VM via cloud-init.
通过 cloud-init 向虚拟机传递包含所有厂商数据的自定义文件。
- cipassword: <string>
-
cloud-init: Password to assign the user. Using this is generally not recommended. Use ssh keys instead. Also note that older cloud-init versions do not support hashed passwords.
cloud-init:分配给用户的密码。通常不建议使用此方法。请改用 ssh 密钥。同时请注意,较旧版本的 cloud-init 不支持哈希密码。 -
citype: <configdrive2 | nocloud | opennebula>
citype:<configdrive2 | nocloud | opennebula> -
Specifies the cloud-init configuration format. The default depends on the configured operating system type (ostype). We use the nocloud format for Linux, and configdrive2 for Windows.
指定 cloud-init 的配置格式。默认值取决于配置的操作系统类型(ostype)。我们对 Linux 使用 nocloud 格式,对 Windows 使用 configdrive2 格式。 -
ciupgrade: <boolean> (default = 1)
ciupgrade:<boolean>(默认值 = 1) -
cloud-init: do an automatic package upgrade after the first boot.
cloud-init:首次启动后自动升级包。 - ciuser: <string> ciuser:<字符串>
-
cloud-init: User name to change ssh keys and password for instead of the image’s configured default user.
cloud-init:用于更改 SSH 密钥和密码的用户名,替代镜像中配置的默认用户。 -
cores: <integer> (1 - N) (default = 1)
cores:<整数>(1 - N)(默认 = 1) -
The number of cores per socket.
每个插槽的核心数。 -
cpu: [[cputype=]<string>] [,flags=<+FLAG[;-FLAG...]>] [,hidden=<1|0>] [,hv-vendor-id=<vendor-id>] [,phys-bits=<8-64|host>] [,reported-model=<enum>]
cpu: [[cputype=]<字符串>] [,flags=<+FLAG[;-FLAG...]>] [,hidden=<1|0>] [,hv-vendor-id=<vendor-id>] [,phys-bits=<8-64|host>] [,reported-model=<枚举>] -
Emulated CPU type. 模拟的 CPU 类型。
-
cputype=<string> (default = kvm64)
cputype=<字符串>(默认 = kvm64) -
Emulated CPU type. Can be default or custom name (custom model names must be prefixed with custom-).
模拟的 CPU 类型。可以是默认值或自定义名称(自定义模型名称必须以 custom-为前缀)。 - flags=<+FLAG[;-FLAG...]>
-
List of additional CPU flags separated by ;. Use +FLAG to enable, -FLAG to disable a flag. Custom CPU models can specify any flag supported by QEMU/KVM, VM-specific flags must be from the following set for security reasons: pcid, spec-ctrl, ibpb, ssbd, virt-ssbd, amd-ssbd, amd-no-ssb, pdpe1gb, md-clear, hv-tlbflush, hv-evmcs, aes
以分号分隔的额外 CPU 标志列表。使用+FLAG 启用标志,-FLAG 禁用标志。自定义 CPU 模型可以指定 QEMU/KVM 支持的任何标志,出于安全原因,虚拟机特定的标志必须来自以下集合:pcid、spec-ctrl、ibpb、ssbd、virt-ssbd、amd-ssbd、amd-no-ssb、pdpe1gb、md-clear、hv-tlbflush、hv-evmcs、aes -
hidden=<boolean> (default = 0)
hidden=<boolean>(默认值 = 0) -
Do not identify as a KVM virtual machine.
不要将其识别为 KVM 虚拟机。 - hv-vendor-id=<vendor-id>
-
The Hyper-V vendor ID. Some drivers or programs inside Windows guests need a specific ID.
Hyper-V 供应商 ID。某些驱动程序或 Windows 客户机内的程序需要特定的 ID。 - phys-bits=<8-64|host>
-
The physical memory address bits that are reported to the guest OS. Should be smaller or equal to the host’s. Set to host to use value from host CPU, but note that doing so will break live migration to CPUs with other values.
报告给客户操作系统的物理内存地址位数。应小于或等于主机的位数。设置为 host 时将使用主机 CPU 的值,但请注意,这样设置会导致无法进行到具有其他值的 CPU 的实时迁移。 -
reported-model=<486 | Broadwell | Broadwell-IBRS | Broadwell-noTSX | Broadwell-noTSX-IBRS | Cascadelake-Server | Cascadelake-Server-noTSX | Cascadelake-Server-v2 | Cascadelake-Server-v4 | Cascadelake-Server-v5 | Conroe | Cooperlake | Cooperlake-v2 | EPYC | EPYC-Genoa | EPYC-IBPB | EPYC-Milan | EPYC-Milan-v2 | EPYC-Rome | EPYC-Rome-v2 | EPYC-Rome-v3 | EPYC-Rome-v4 | EPYC-v3 | EPYC-v4 | GraniteRapids | Haswell | Haswell-IBRS | Haswell-noTSX | Haswell-noTSX-IBRS | Icelake-Client | Icelake-Client-noTSX | Icelake-Server | Icelake-Server-noTSX | Icelake-Server-v3 | Icelake-Server-v4 | Icelake-Server-v5 | Icelake-Server-v6 | IvyBridge | IvyBridge-IBRS | KnightsMill | Nehalem | Nehalem-IBRS | Opteron_G1 | Opteron_G2 | Opteron_G3 | Opteron_G4 | Opteron_G5 | Penryn | SandyBridge | SandyBridge-IBRS | SapphireRapids | SapphireRapids-v2 | Skylake-Client | Skylake-Client-IBRS | Skylake-Client-noTSX-IBRS | Skylake-Client-v4 | Skylake-Server | Skylake-Server-IBRS | Skylake-Server-noTSX-IBRS | Skylake-Server-v4 | Skylake-Server-v5 | Westmere | Westmere-IBRS | athlon | core2duo | coreduo | host | kvm32 | kvm64 | max | pentium | pentium2 | pentium3 | phenom | qemu32 | qemu64> (default = kvm64)
reported-model=<486 | Broadwell | Broadwell-IBRS | Broadwell-noTSX | Broadwell-noTSX-IBRS | Cascadelake-Server | Cascadelake-Server-noTSX | Cascadelake-Server-v2 | Cascadelake-Server-v4 | Cascadelake-Server-v5 | Conroe | Cooperlake | Cooperlake-v2 | EPYC | EPYC-Genoa | EPYC-IBPB | EPYC-Milan | EPYC-Milan-v2 | EPYC-Rome | EPYC-Rome-v2 | EPYC-Rome-v3 | EPYC-Rome-v4 | EPYC-v3 | EPYC-v4 | GraniteRapids | Haswell | Haswell-IBRS | Haswell-noTSX | Haswell-noTSX-IBRS | Icelake-Client | Icelake-Client-noTSX | Icelake-Server | Icelake-Server-noTSX | Icelake-Server-v3 | Icelake-Server-v4 | Icelake-Server-v5 | Icelake-Server-v6 | IvyBridge | IvyBridge-IBRS | KnightsMill | Nehalem | Nehalem-IBRS | Opteron_G1 | Opteron_G2 | Opteron_G3 | Opteron_G4 | Opteron_G5 | Penryn | SandyBridge | SandyBridge-IBRS | SapphireRapids | SapphireRapids-v2 | Skylake-Client | Skylake-Client-IBRS | Skylake-Client-noTSX-IBRS | Skylake-Client-v4 | Skylake-Server | Skylake-Server-IBRS | Skylake-Server-noTSX-IBRS | Skylake-Server-v4 | Skylake-Server-v5 | Westmere | Westmere-IBRS | athlon | core2duo | coreduo | host | kvm32 | kvm64 | max | pentium | pentium2 | pentium3 | phenom | qemu32 | qemu64>(默认 = kvm64) -
CPU model and vendor to report to the guest. Must be a QEMU/KVM supported model. Only valid for custom CPU model definitions, default models will always report themselves to the guest OS.
报告给客户机的 CPU 型号和厂商。必须是 QEMU/KVM 支持的型号。仅对自定义 CPU 型号定义有效,默认型号将始终向客户机操作系统报告自身信息。
-
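A minimal sketch, assuming VM 100; the flag names are taken from the allowed set listed above:
qm set 100 --cpu 'cputype=kvm64,flags=+aes;+pcid'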
-
cpulimit: <number> (0 - 128) (default = 0)
cpulimit: <数字>(0 - 128)(默认 = 0) -
Limit of CPU usage.
CPU 使用限制。If the computer has 2 CPUs, it has a total of 2 CPU time. Value 0 indicates no CPU limit.
如果计算机有 2 个 CPU,则总共有 2 个 CPU 时间。值为 0 表示没有 CPU 限制。 -
cpuunits: <integer> (1 - 262144) (default = cgroup v1: 1024, cgroup v2: 100)
cpuunits: <整数> (1 - 262144) (默认值 = cgroup v1: 1024, cgroup v2: 100) -
CPU weight for a VM. Argument is used in the kernel fair scheduler. The larger the number is, the more CPU time this VM gets. Number is relative to weights of all the other running VMs.
虚拟机的 CPU 权重。该参数用于内核公平调度器。数值越大,该虚拟机获得的 CPU 时间越多。该数值相对于所有其他正在运行的虚拟机的权重而言。 - description: <string> description: <字符串>
-
Description for the VM. Shown in the web-interface VM’s summary. This is saved as comment inside the configuration file.
虚拟机的描述。在网页界面虚拟机摘要中显示。此信息作为注释保存在配置文件中。 - efidisk0: [file=]<volume> [,efitype=<2m|4m>] [,format=<enum>] [,pre-enrolled-keys=<1|0>] [,size=<DiskSize>]
-
Configure a disk for storing EFI vars.
配置用于存储 EFI 变量的磁盘。-
efitype=<2m | 4m> (default = 2m)
efitype=<2m | 4m>(默认 = 2m) -
Size and type of the OVMF EFI vars. 4m is newer and recommended, and required for Secure Boot. For backwards compatibility, 2m is used if not otherwise specified. Ignored for VMs with arch=aarch64 (ARM).
OVMF EFI 变量的大小和类型。4m 是较新的版本,推荐使用,并且是安全启动所必需的。为了向后兼容,如果未另行指定,则使用 2m。对于 arch=aarch64(ARM)的虚拟机,此设置被忽略。 - file=<volume>
-
The drive’s backing volume.
驱动器的后备卷。 - format=<cloop | qcow | qcow2 | qed | raw | vmdk>
-
The drive’s backing file’s data format.
驱动器后备文件的数据格式。 -
pre-enrolled-keys=<boolean> (default = 0)
pre-enrolled-keys=<boolean>(默认值 = 0) -
Use an EFI vars template with distribution-specific and Microsoft Standard keys enrolled, if used with efitype=4m. Note that this will enable Secure Boot by default, though it can still be turned off from within the VM.
如果与 efitype=4m 一起使用,则使用包含发行版特定和 Microsoft 标准密钥的 EFI 变量模板。请注意,这将默认启用安全启动,但仍可以在虚拟机内关闭。 - size=<DiskSize>
-
Disk size. This is purely informational and has no effect.
磁盘大小。此信息仅供参考,不会产生任何影响。
-
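For example, to allocate an EFI vars disk with the newer 4m type and pre-enrolled Secure Boot keys (the storage name local-lvm and VM ID 100 are assumptions):
qm set 100 --efidisk0 local-lvm:1,efitype=4m,pre-enrolled-keys=1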
- freeze: <boolean>
-
Freeze CPU at startup (use c monitor command to start execution).
启动时冻结 CPU(使用 c 监视器命令开始执行)。 - hookscript: <string>
-
Script that will be executed during various steps in the vms lifetime.
将在虚拟机生命周期的各个阶段执行的脚本。 -
hostpci[n]: [[host=]<HOSTPCIID[;HOSTPCIID2...]>] [,device-id=<hex id>] [,legacy-igd=<1|0>] [,mapping=<mapping-id>] [,mdev=<string>] [,pcie=<1|0>] [,rombar=<1|0>] [,romfile=<string>] [,sub-device-id=<hex id>] [,sub-vendor-id=<hex id>] [,vendor-id=<hex id>] [,x-vga=<1|0>]
hostpci[n]: [[host=]<HOSTPCIID[;HOSTPCIID2...]>] [,device-id=<十六进制 ID>] [,legacy-igd=<1|0>] [,mapping=<映射 ID>] [,mdev=<字符串>] [,pcie=<1|0>] [,rombar=<1|0>] [,romfile=<字符串>] [,sub-device-id=<十六进制 ID>] [,sub-vendor-id=<十六进制 ID>] [,vendor-id=<十六进制 ID>] [,x-vga=<1|0>] -
Map host PCI devices into guest.
将主机 PCI 设备映射到虚拟机中。This option allows direct access to host hardware. So it is no longer possible to migrate such machines - use with special care.
此选项允许直接访问主机硬件。因此,此类虚拟机无法迁移——请谨慎使用。Experimental! User reported problems with this option.
实验性功能!用户报告使用此选项时存在问题。- device-id=<hex id> device-id=<十六进制 ID>
-
Override PCI device ID visible to guest
覆盖对客户机可见的 PCI 设备 ID - host=<HOSTPCIID[;HOSTPCIID2...]>
-
Host PCI device pass through. The PCI ID of a host’s PCI device or a list of PCI virtual functions of the host. HOSTPCIID syntax is:
主机 PCI 设备直通。主机 PCI 设备的 PCI ID 或主机 PCI 虚拟功能列表。HOSTPCIID 的语法为:bus:dev.func (hexadecimal numbers)
总线:设备.功能(十六进制数字)You can use the lspci command to list existing PCI devices.
您可以使用 lspci 命令列出现有的 PCI 设备。Either this or the mapping key must be set.
必须设置此项或映射键中的一项。 -
legacy-igd=<boolean> (default = 0)
legacy-igd=<布尔值>(默认 = 0) -
Pass this device in legacy IGD mode, making it the primary and exclusive graphics device in the VM. Requires pc-i440fx machine type and VGA set to none.
以传统 IGD 模式传递此设备,使其成为虚拟机中的主要且唯一的图形设备。需要 pc-i440fx 机器类型且 VGA 设置为 none。 - mapping=<mapping-id>
-
The ID of a cluster wide mapping. Either this or the default-key host must be set.
集群范围映射的 ID。必须设置此项或 default-key host 中的一项。 - mdev=<string>
-
The type of mediated device to use. An instance of this type will be created on startup of the VM and will be cleaned up when the VM stops.
要使用的中介设备类型。此类型的实例将在虚拟机启动时创建,并在虚拟机停止时清理。 -
pcie=<boolean> (default = 0)
pcie=<boolean>(默认值 = 0) -
Choose the PCI-express bus (needs the q35 machine model).
选择 PCI-express 总线(需要 q35 机器模型)。 -
rombar=<boolean> (default = 1)
rombar=<boolean>(默认值 = 1) -
Specify whether or not the device’s ROM will be visible in the guest’s memory map.
指定设备的 ROM 是否在客户机的内存映射中可见。 - romfile=<string> romfile=<字符串>
-
Custom pci device rom filename (must be located in /usr/share/kvm/).
自定义 PCI 设备 ROM 文件名(必须位于/usr/share/kvm/目录下)。 - sub-device-id=<hex id> sub-device-id=<十六进制 ID>
-
Override PCI subsystem device ID visible to guest
覆盖对客户机可见的 PCI 子系统设备 ID - sub-vendor-id=<hex id> sub-vendor-id=<十六进制 ID>
-
Override PCI subsystem vendor ID visible to guest
覆盖对客户机可见的 PCI 子系统厂商 ID - vendor-id=<hex id> vendor-id=<十六进制 ID>
-
Override PCI vendor ID visible to guest
覆盖对客户机可见的 PCI 供应商 ID -
x-vga=<boolean> (default = 0)
x-vga=<布尔值>(默认 = 0) -
Enable vfio-vga device support.
启用 vfio-vga 设备支持。
-
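An illustrative sketch only: the PCI address below is a placeholder (check lspci on your host), VM 100 is assumed, and pcie=1 additionally requires the q35 machine model as noted above:
qm set 100 --hostpci0 0000:01:00.0,pcie=1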
hotplug: <string> (default = network,disk,usb)
hotplug: <字符串>(默认 = network,disk,usb) -
Selectively enable hotplug features. This is a comma separated list of hotplug features: network, disk, cpu, memory, usb and cloudinit. Use 0 to disable hotplug completely. Using 1 as value is an alias for the default network,disk,usb. USB hotplugging is possible for guests with machine version >= 7.1 and ostype l26 or windows > 7.
选择性启用热插拔功能。这是一个以逗号分隔的热插拔功能列表:network、disk、cpu、memory、usb 和 cloudinit。使用 0 完全禁用热插拔。使用 1 作为值是默认启用 network、disk、usb 的别名。对于机器版本 >= 7.1 且操作系统类型为 l26 或 Windows > 7 的虚拟机,支持 USB 热插拔。 - hugepages: <1024 | 2 | any>
-
Enable/disable hugepages memory.
启用/禁用大页内存。 - ide[n]: [file=]<volume> [,aio=<native|threads|io_uring>] [,backup=<1|0>] [,bps=<bps>] [,bps_max_length=<seconds>] [,bps_rd=<bps>] [,bps_rd_max_length=<seconds>] [,bps_wr=<bps>] [,bps_wr_max_length=<seconds>] [,cache=<enum>] [,cyls=<integer>] [,detect_zeroes=<1|0>] [,discard=<ignore|on>] [,format=<enum>] [,heads=<integer>] [,iops=<iops>] [,iops_max=<iops>] [,iops_max_length=<seconds>] [,iops_rd=<iops>] [,iops_rd_max=<iops>] [,iops_rd_max_length=<seconds>] [,iops_wr=<iops>] [,iops_wr_max=<iops>] [,iops_wr_max_length=<seconds>] [,mbps=<mbps>] [,mbps_max=<mbps>] [,mbps_rd=<mbps>] [,mbps_rd_max=<mbps>] [,mbps_wr=<mbps>] [,mbps_wr_max=<mbps>] [,media=<cdrom|disk>] [,model=<model>] [,replicate=<1|0>] [,rerror=<ignore|report|stop>] [,secs=<integer>] [,serial=<serial>] [,shared=<1|0>] [,size=<DiskSize>] [,snapshot=<1|0>] [,ssd=<1|0>] [,trans=<none|lba|auto>] [,werror=<enum>] [,wwn=<wwn>]
-
Use volume as IDE hard disk or CD-ROM (n is 0 to 3).
将卷用作 IDE 硬盘或光驱(n 为 0 到 3)。- aio=<io_uring | native | threads>
-
AIO type to use.
要使用的 AIO 类型。 - backup=<boolean>
-
Whether the drive should be included when making backups.
是否在备份时包含该驱动器。 - bps=<bps>
-
Maximum r/w speed in bytes per second.
最大读写速度,单位为字节每秒。 -
bps_max_length=<seconds>
bps_max_length=<秒> -
Maximum length of I/O bursts in seconds.
I/O 突发的最大持续时间,单位为秒。 - bps_rd=<bps>
-
Maximum read speed in bytes per second.
最大读取速度,单位为字节每秒。 -
bps_rd_max_length=<seconds>
bps_rd_max_length=<秒> -
Maximum length of read I/O bursts in seconds.
读取 I/O 突发的最大持续时间,单位为秒。 - bps_wr=<bps> bps_wr=<字节每秒>
-
Maximum write speed in bytes per second.
最大写入速度,单位为字节每秒。 -
bps_wr_max_length=<seconds>
bps_wr_max_length=<秒> -
Maximum length of write I/O bursts in seconds.
写入 I/O 突发的最大时长,单位为秒。 - cache=<directsync | none | unsafe | writeback | writethrough>
-
The drive’s cache mode
驱动器的缓存模式 - cyls=<integer> cyls=<整数>
-
Force the drive’s physical geometry to have a specific cylinder count.
强制驱动器的物理几何结构具有特定的柱面数。 - detect_zeroes=<boolean> detect_zeroes=<布尔值>
-
Controls whether to detect and try to optimize writes of zeroes.
控制是否检测并尝试优化零值写入。 - discard=<ignore | on>
-
Controls whether to pass discard/trim requests to the underlying storage.
控制是否将 discard/trim 请求传递到底层存储。 - file=<volume>
-
The drive’s backing volume.
驱动器的后备卷。 - format=<cloop | qcow | qcow2 | qed | raw | vmdk>
-
The drive’s backing file’s data format.
驱动器的后备文件的数据格式。 - heads=<integer>
-
Force the drive’s physical geometry to have a specific head count.
强制驱动器的物理几何结构具有特定的磁头数量。 - iops=<iops>
-
Maximum r/w I/O in operations per second.
最大读/写 I/O 操作次数(每秒)。 - iops_max=<iops>
-
Maximum unthrottled r/w I/O pool in operations per second.
最大不受限制的读/写 I/O 池操作次数(每秒)。 -
iops_max_length=<seconds>
iops_max_length=<秒> -
Maximum length of I/O bursts in seconds.
I/O 突发的最大持续时间,单位为秒。 - iops_rd=<iops>
-
Maximum read I/O in operations per second.
最大读取 I/O 操作次数,单位为每秒操作数。 - iops_rd_max=<iops>
-
Maximum unthrottled read I/O pool in operations per second.
最大非限制读 I/O 池,单位为每秒操作次数。 - iops_rd_max_length=<seconds>
-
Maximum length of read I/O bursts in seconds.
读 I/O 突发的最大持续时间,单位为秒。 - iops_wr=<iops>
-
Maximum write I/O in operations per second.
最大写入 I/O 操作次数(每秒)。 - iops_wr_max=<iops>
-
Maximum unthrottled write I/O pool in operations per second.
最大非限制写入 I/O 池操作次数(每秒)。 -
iops_wr_max_length=<seconds>
iops_wr_max_length=<秒> -
Maximum length of write I/O bursts in seconds.
写入 I/O 突发的最大持续时间,单位为秒。 - mbps=<mbps> mbps=<兆字节每秒>
-
Maximum r/w speed in megabytes per second.
最大读/写速度,单位为兆字节每秒。 - mbps_max=<mbps>
-
Maximum unthrottled r/w pool in megabytes per second.
最大不受限制的读写池速度,单位为兆字节每秒。 - mbps_rd=<mbps>
-
Maximum read speed in megabytes per second.
最大读取速度,单位为兆字节每秒。 - mbps_rd_max=<mbps>
-
Maximum unthrottled read pool in megabytes per second.
最大不受限制的读取池速度,单位为兆字节每秒。 - mbps_wr=<mbps>
-
Maximum write speed in megabytes per second.
最大写入速度,单位为兆字节每秒。 - mbps_wr_max=<mbps>
-
Maximum unthrottled write pool in megabytes per second.
最大不受限写入池,单位为兆字节每秒。 -
media=<cdrom | disk> (default = disk)
media=<cdrom | disk>(默认 = disk) -
The drive’s media type.
驱动器的介质类型。 - model=<model>
-
The drive’s reported model name, url-encoded, up to 40 bytes long.
驱动器报告的型号名称,经过 URL 编码,最长 40 字节。 -
replicate=<boolean> (default = 1)
replicate=<boolean> (默认 = 1) -
Whether the drive should be considered for replication jobs.
是否将该驱动器考虑用于复制任务。 - rerror=<ignore | report | stop>
-
Read error action. 读取错误操作。
- secs=<integer> secs=<整数>
-
Force the drive’s physical geometry to have a specific sector count.
强制驱动器的物理几何结构具有特定的扇区数。 - serial=<serial>
-
The drive’s reported serial number, url-encoded, up to 20 bytes long.
驱动器报告的序列号,经过 URL 编码,最长 20 字节。 -
shared=<boolean> (default = 0)
shared=<boolean>(默认值 = 0) -
Mark this locally-managed volume as available on all nodes.
将此本地管理的卷标记为在所有节点上可用。This option does not share the volume automatically, it assumes it is shared already!
此选项不会自动共享卷,它假设卷已经被共享! - size=<DiskSize>
-
Disk size. This is purely informational and has no effect.
磁盘大小。此信息仅供参考,不会产生任何影响。 - snapshot=<boolean>
-
Controls qemu’s snapshot mode feature. If activated, changes made to the disk are temporary and will be discarded when the VM is shutdown.
控制 qemu 的快照模式功能。如果激活,对磁盘所做的更改是临时的,虚拟机关闭时将被丢弃。 - ssd=<boolean>
-
Whether to expose this drive as an SSD, rather than a rotational hard disk.
是否将此驱动器作为 SSD 暴露,而不是旋转硬盘。 - trans=<auto | lba | none>
-
Force disk geometry bios translation mode.
强制磁盘几何 BIOS 翻译模式。 - werror=<enospc | ignore | report | stop>
-
Write error action. 写入错误操作。
- wwn=<wwn>
-
The drive’s worldwide name, encoded as 16 bytes hex string, prefixed by 0x.
驱动器的全球唯一名称,编码为 16 字节的十六进制字符串,前缀为 0x。
-
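For example, attaching an installer image as an IDE CD-ROM, where the storage local and the ISO file name are placeholders:
qm set 100 --ide2 local:iso/debian-12.iso,media=cdrom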
ipconfig[n]: [gw=<GatewayIPv4>] [,gw6=<GatewayIPv6>] [,ip=<IPv4Format/CIDR>] [,ip6=<IPv6Format/CIDR>]
ipconfig[n]: [gw=<GatewayIPv4>] [,gw6=<GatewayIPv6>] [,ip=<IPv4 格式/CIDR>] [,ip6=<IPv6 格式/CIDR>] -
cloud-init: Specify IP addresses and gateways for the corresponding interface.
cloud-init:为相应的接口指定 IP 地址和网关。IP addresses use CIDR notation, gateways are optional but need an IP of the same type specified.
IP 地址使用 CIDR 表示法,网关是可选的,但需要指定相同类型的 IP。The special string dhcp can be used for IP addresses to use DHCP, in which case no explicit gateway should be provided. For IPv6 the special string auto can be used to use stateless autoconfiguration. This requires cloud-init 19.4 or newer.
特殊字符串 dhcp 可用于 IP 地址以使用 DHCP,在这种情况下不应提供显式的网关。对于 IPv6,可以使用特殊字符串 auto 来使用无状态自动配置。这需要 cloud-init 19.4 或更高版本。If cloud-init is enabled and neither an IPv4 nor an IPv6 address is specified, it defaults to using dhcp on IPv4.
如果启用了 cloud-init 且未指定 IPv4 或 IPv6 地址,则默认使用 IPv4 的 dhcp。- gw=<GatewayIPv4>
-
Default gateway for IPv4 traffic.
IPv4 流量的默认网关。Requires option(s): ip 需要选项:ip - gw6=<GatewayIPv6>
-
Default gateway for IPv6 traffic.
IPv6 流量的默认网关。Requires option(s): ip6 需要选项:ip6 -
ip=<IPv4Format/CIDR> (default = dhcp)
ip=<IPv4 格式/CIDR>(默认 = dhcp) -
IPv4 address in CIDR format.
CIDR 格式的 IPv4 地址。 -
ip6=<IPv6Format/CIDR> (default = dhcp)
ip6=<IPv6 格式/CIDR>(默认 = dhcp) -
IPv6 address in CIDR format.
CIDR 格式的 IPv6 地址。
-
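A minimal cloud-init sketch, assuming VM 100 and an illustrative subnet (the second interface simply uses DHCP):
qm set 100 --ipconfig0 ip=192.0.2.10/24,gw=192.0.2.1
qm set 100 --ipconfig1 ip=dhcp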
ivshmem: size=<integer> [,name=<string>]
ivshmem:size=<整数> [,name=<字符串>] -
Inter-VM shared memory. Useful for direct communication between VMs, or to the host.
虚拟机间共享内存。适用于虚拟机之间或与主机之间的直接通信。- name=<string> name=<字符串>
-
The name of the file. Will be prefixed with pve-shm-. Default is the VMID. Will be deleted when the VM is stopped.
文件名。将以 pve-shm- 作为前缀。默认是虚拟机 ID。虚拟机停止时该文件将被删除。 -
size=<integer> (1 - N)
size=<整数> (1 - N) -
The size of the file in MB.
文件大小,单位为 MB。
-
keephugepages: <boolean> (default = 0)
keephugepages: <布尔值> (默认 = 0) -
Use together with hugepages. If enabled, hugepages will not be deleted after VM shutdown and can be used for subsequent starts.
与 hugepages 一起使用。如果启用,hugepages 在虚拟机关闭后不会被删除,可以用于后续启动。 -
keyboard: <da | de | de-ch | en-gb | en-us | es | fi | fr | fr-be | fr-ca | fr-ch | hu | is | it | ja | lt | mk | nl | no | pl | pt | pt-br | sl | sv | tr>
keyboard: <da | de | de-ch | en-gb | en-us | es | fi | fr | fr-be | fr-ca | fr-ch | hu | is | it | ja | lt | mk | nl | no | pl | pt | pt-br | sl | sv | tr>
Keyboard layout for VNC server. This option is generally not required and is often better handled from within the guest OS.
VNC 服务器的键盘布局。此选项通常不需要,且通常更适合在客户操作系统内进行设置。 -
kvm: <boolean> (default = 1)
kvm:<boolean>(默认 = 1) -
Enable/disable KVM hardware virtualization.
启用/禁用 KVM 硬件虚拟化。 - localtime: <boolean>
-
Set the real time clock (RTC) to local time. This is enabled by default if the ostype indicates a Microsoft Windows OS.
将实时时钟(RTC)设置为本地时间。如果操作系统类型指示为 Microsoft Windows 操作系统,则默认启用此功能。 - lock: <backup | clone | create | migrate | rollback | snapshot | snapshot-delete | suspended | suspending>
-
Lock/unlock the VM. 锁定/解锁虚拟机。
-
machine: [[type=]<machine type>] [,enable-s3=<1|0>] [,enable-s4=<1|0>] [,viommu=<intel|virtio>]
machine: [[type=]<机器类型>] [,enable-s3=<1|0>] [,enable-s4=<1|0>] [,viommu=<intel|virtio>] -
Specify the QEMU machine.
指定 QEMU 机器。- enable-s3=<boolean> enable-s3=<布尔值>
-
Enables S3 power state. Defaults to false beginning with machine types 9.2+pve1, true before.
启用 S3 电源状态。从机器类型 9.2+pve1 开始默认为 false,之前为 true。 - enable-s4=<boolean>
-
Enables S4 power state. Defaults to false beginning with machine types 9.2+pve1, true before.
启用 S4 电源状态。从机器类型 9.2+pve1 开始,默认值为 false,之前为 true。 - type=<machine type>
-
Specifies the QEMU machine type.
指定 QEMU 机器类型。 - viommu=<intel | virtio>
-
Enable and set guest vIOMMU variant (Intel vIOMMU needs q35 to be set as machine type).
启用并设置客户机 vIOMMU 变体(Intel vIOMMU 需要将机器类型设置为 q35)。
- memory: [current=]<integer>
-
Memory properties. 内存属性。
-
current=<integer> (16 - N) (default = 512)
current=<整数> (16 - N) (默认 = 512) -
Current amount of online RAM for the VM in MiB. This is the maximum available memory when you use the balloon device.
虚拟机当前在线的内存大小,单位为 MiB。当使用气球设备时,这是可用的最大内存。
-
-
migrate_downtime: <number> (0 - N) (default = 0.1)
migrate_downtime: <数字> (0 - N) (默认 = 0.1) -
Set maximum tolerated downtime (in seconds) for migrations. Should the migration not be able to converge in the very end, because too much newly dirtied RAM needs to be transferred, the limit will be increased automatically step-by-step until migration can converge.
设置迁移时允许的最大停机时间(秒)。如果迁移在最后阶段无法收敛,因为需要传输过多新脏页内存,限制将自动逐步增加,直到迁移能够收敛。 -
migrate_speed: <integer> (0 - N) (default = 0)
migrate_speed: <整数> (0 - N) (默认 = 0) -
Set maximum speed (in MB/s) for migrations. Value 0 is no limit.
设置迁移的最大速度(单位:MB/s)。值为 0 表示无限制。 - name: <string> name: <字符串>
-
Set a name for the VM. Only used on the configuration web interface.
为虚拟机设置名称。仅在配置网页界面中使用。 - nameserver: <string>
-
cloud-init: Sets DNS server IP address for a container. Create will automatically use the setting from the host if neither searchdomain nor nameserver are set.
cloud-init:为容器设置 DNS 服务器 IP 地址。如果未设置 searchdomain 和 nameserver,创建时将自动使用主机的设置。 - net[n]: [model=]<enum> [,bridge=<bridge>] [,firewall=<1|0>] [,link_down=<1|0>] [,macaddr=<XX:XX:XX:XX:XX:XX>] [,mtu=<integer>] [,queues=<integer>] [,rate=<number>] [,tag=<integer>] [,trunks=<vlanid[;vlanid...]>] [,<model>=<macaddr>]
-
Specify network devices.
指定网络设备。- bridge=<bridge>
-
Bridge to attach the network device to. The Proxmox VE standard bridge is called vmbr0.
要连接网络设备的桥。Proxmox VE 标准桥称为 vmbr0。If you do not specify a bridge, we create a kvm user (NATed) network device, which provides DHCP and DNS services. The following addresses are used:
如果您未指定桥,我们将创建一个 kvm 用户(NAT)网络设备,提供 DHCP 和 DNS 服务。使用以下地址:
10.0.2.2 Gateway
10.0.2.3 DNS Server
10.0.2.4 SMB Server
The DHCP server assigns addresses to the guest starting from 10.0.2.15.
DHCP 服务器从 10.0.2.15 开始为客户机分配地址。 - firewall=<boolean>
-
Whether this interface should be protected by the firewall.
该接口是否应受到防火墙的保护。 - link_down=<boolean>
-
Whether this interface should be disconnected (like pulling the plug).
该接口是否应断开连接(如拔掉插头)。 - macaddr=<XX:XX:XX:XX:XX:XX>
-
A common MAC address with the I/G (Individual/Group) bit not set.
一个常见的 MAC 地址,I/G(单个/组)位未设置。 - model=<e1000 | e1000-82540em | e1000-82544gc | e1000-82545em | e1000e | i82551 | i82557b | i82559er | ne2k_isa | ne2k_pci | pcnet | rtl8139 | virtio | vmxnet3>
-
Network Card Model. The virtio model provides the best performance with very low CPU overhead. If your guest does not support this driver, it is usually best to use e1000.
网络卡型号。virtio 型号提供最佳性能且 CPU 开销极低。如果您的客户机不支持此驱动,通常最好使用 e1000。 -
mtu=<integer> (1 - 65520)
mtu=<整数> (1 - 65520) -
Force MTU, for VirtIO only. Set to 1 to use the bridge MTU
仅针对 VirtIO 强制设置 MTU。设置为 1 表示使用桥接的 MTU -
queues=<integer> (0 - 64)
queues=<整数> (0 - 64) -
Number of packet queues to be used on the device.
设备上使用的数据包队列数量。 -
rate=<number> (0 - N)
rate=<数字> (0 - N) -
Rate limit in mbps (megabytes per second) as floating point number.
速率限制,单位为 mbps(兆字节每秒),以浮点数表示。 -
tag=<integer> (1 - 4094)
tag=<整数> (1 - 4094) -
VLAN tag to apply to packets on this interface.
应用于此接口数据包的 VLAN 标签。 - trunks=<vlanid[;vlanid...]>
-
VLAN trunks to pass through this interface.
通过此接口传递的 VLAN 中继。
-
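For example, a VirtIO NIC on the standard bridge with a VLAN tag and the firewall enabled (VM ID, bridge name and tag are illustrative):
qm set 100 --net0 virtio,bridge=vmbr0,tag=100,firewall=1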
numa: <boolean> (default = 0)
numa: <boolean>(默认值 = 0) -
Enable/disable NUMA. 启用/禁用 NUMA。
-
numa[n]: cpus=<id[-id];...> [,hostnodes=<id[-id];...>] [,memory=<number>] [,policy=<preferred|bind|interleave>]
numa[n]:cpus=<id[-id];...> [,hostnodes=<id[-id];...>] [,memory=<数字>] [,policy=<preferred|bind|interleave>] -
NUMA topology. NUMA 拓扑结构。
- cpus=<id[-id];...>
-
CPUs accessing this NUMA node.
访问此 NUMA 节点的 CPU。 - hostnodes=<id[-id];...>
-
Host NUMA nodes to use.
要使用的主机 NUMA 节点。 - memory=<number>
-
Amount of memory this NUMA node provides.
此 NUMA 节点提供的内存量。 - policy=<bind | interleave | preferred>
-
NUMA allocation policy. NUMA 分配策略。
-
onboot: <boolean> (default = 0)
onboot: <boolean>(默认 = 0) -
Specifies whether a VM will be started during system bootup.
指定虚拟机是否在系统启动时自动启动。 - ostype: <l24 | l26 | other | solaris | w2k | w2k3 | w2k8 | win10 | win11 | win7 | win8 | wvista | wxp>
-
Specify guest operating system. This is used to enable special optimization/features for specific operating systems:
指定客户机操作系统。此选项用于启用针对特定操作系统的特殊优化/功能:other 其他
unspecified OS 未指定的操作系统
wxp
Microsoft Windows XP
w2k
Microsoft Windows 2000
w2k3
Microsoft Windows 2003
w2k8
Microsoft Windows 2008
wvista
Microsoft Windows Vista 微软 Windows Vista
win7
Microsoft Windows 7 微软 Windows 7
win8
Microsoft Windows 8/2012/2012r2
win10
Microsoft Windows 10/2016/2019
win11
Microsoft Windows 11/2022/2025
l24
Linux 2.4 Kernel Linux 2.4 内核
l26
Linux 2.6 - 6.X Kernel
Linux 2.6 - 6.X 内核
solaris
Solaris/OpenSolaris/OpenIndiana kernel
Solaris/OpenSolaris/OpenIndiana 内核 -
parallel[n]: /dev/parport\d+|/dev/usb/lp\d+
parallel[n]:/dev/parport\d+|/dev/usb/lp\d+ -
Map host parallel devices (n is 0 to 2).
映射主机并行设备(n 为 0 到 2)。This option allows direct access to host hardware. So it is no longer possible to migrate such machines - use with special care.
此选项允许直接访问主机硬件。因此,无法迁移此类虚拟机——请谨慎使用。Experimental! User reported problems with this option.
实验性功能!用户报告使用此选项时出现问题。 -
protection: <boolean> (default = 0)
protection: <boolean>(默认值 = 0) -
Sets the protection flag of the VM. This will disable the remove VM and remove disk operations.
设置虚拟机的保护标志。这将禁用删除虚拟机和删除磁盘操作。 -
reboot: <boolean> (default = 1)
reboot: <boolean>(默认值 = 1) -
Allow reboot. If set to 0, the VM exits on reboot.
允许重启。如果设置为 0,虚拟机在重启时将退出。 -
rng0: [source=]</dev/urandom|/dev/random|/dev/hwrng> [,max_bytes=<integer>] [,period=<integer>]
rng0: [source=]</dev/urandom|/dev/random|/dev/hwrng> [,max_bytes=<整数>] [,period=<整数>] -
Configure a VirtIO-based Random Number Generator.
配置基于 VirtIO 的随机数生成器。-
max_bytes=<integer> (default = 1024)
max_bytes=<整数>(默认值 = 1024) -
Maximum bytes of entropy allowed to get injected into the guest every period milliseconds. Use 0 to disable limiting (potentially dangerous!).
每个周期(毫秒)允许注入到客户机的最大熵字节数。使用 0 表示禁用限制(可能存在危险!)。 -
period=<integer> (default = 1000)
period=<整数>(默认值 = 1000) -
Every period milliseconds the entropy-injection quota is reset, allowing the guest to retrieve another max_bytes of entropy.
每隔 period 毫秒,熵注入配额会被重置,允许客户机再次获取最多 max_bytes 的熵。 - source=</dev/hwrng | /dev/random | /dev/urandom>
-
The file on the host to gather entropy from. Using urandom does not decrease security in any meaningful way, as it’s still seeded from real entropy, and the bytes provided will most likely be mixed with real entropy on the guest as well. /dev/hwrng can be used to pass through a hardware RNG from the host.
主机上用于收集熵的文件。使用 urandom 并不会在任何实质性方面降低安全性,因为它仍然是由真实熵进行种子初始化的,并且提供的字节很可能也会与客户机上的真实熵混合。/dev/hwrng 可用于从主机传递硬件随机数生成器。
-
-
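For example, the documented defaults made explicit for an assumed VM 100:
qm set 100 --rng0 source=/dev/urandom,max_bytes=1024,period=1000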
sata[n]: [file=]<volume> [,aio=<native|threads|io_uring>] [,backup=<1|0>] [,bps=<bps>] [,bps_max_length=<seconds>] [,bps_rd=<bps>] [,bps_rd_max_length=<seconds>] [,bps_wr=<bps>] [,bps_wr_max_length=<seconds>] [,cache=<enum>] [,cyls=<integer>] [,detect_zeroes=<1|0>] [,discard=<ignore|on>] [,format=<enum>] [,heads=<integer>] [,iops=<iops>] [,iops_max=<iops>] [,iops_max_length=<seconds>] [,iops_rd=<iops>] [,iops_rd_max=<iops>] [,iops_rd_max_length=<seconds>] [,iops_wr=<iops>] [,iops_wr_max=<iops>] [,iops_wr_max_length=<seconds>] [,mbps=<mbps>] [,mbps_max=<mbps>] [,mbps_rd=<mbps>] [,mbps_rd_max=<mbps>] [,mbps_wr=<mbps>] [,mbps_wr_max=<mbps>] [,media=<cdrom|disk>] [,replicate=<1|0>] [,rerror=<ignore|report|stop>] [,secs=<integer>] [,serial=<serial>] [,shared=<1|0>] [,size=<DiskSize>] [,snapshot=<1|0>] [,ssd=<1|0>] [,trans=<none|lba|auto>] [,werror=<enum>] [,wwn=<wwn>]
sata[n]:[文件=]<卷> [,aio=<native|threads|io_uring>] [,backup=<1|0>] [,bps=<bps>] [,bps_max_length=<秒>] [,bps_rd=<bps>] [,bps_rd_max_length=<秒>] [,bps_wr=<bps>] [,bps_wr_max_length=<秒>] [,cache=<枚举>] [,cyls=<整数>] [,detect_zeroes=<1|0>] [,discard=<ignore|on>] [,format=<枚举>] [,heads=<整数>] [,iops=<iops>] [,iops_max=<iops>] [,iops_max_length=<秒>] [,iops_rd=<iops>] [,iops_rd_max=<iops>] [,iops_rd_max_length=<秒>] [,iops_wr=<iops>] [,iops_wr_max=<iops>] [,iops_wr_max_length=<秒>] [,mbps=<mbps>] [,mbps_max=<mbps>] [,mbps_rd=<mbps>] [,mbps_rd_max=<mbps>] [,mbps_wr=<mbps>] [,mbps_wr_max=<mbps>] [,media=<cdrom|disk>] [,replicate=<1|0>] [,rerror=<ignore|report|stop>] [,secs=<整数>] [,serial=<序列号>] [,shared=<1|0>] [,size=<磁盘大小>] [,snapshot=<1|0>] [,ssd=<1|0>] [,trans=<none|lba|auto>] [,werror=<枚举>] [,wwn=<wwn>] -
Use volume as SATA hard disk or CD-ROM (n is 0 to 5).
将卷用作 SATA 硬盘或光驱(n 为 0 到 5)。- aio=<io_uring | native | threads>
-
AIO type to use.
要使用的 AIO 类型。 - backup=<boolean>
-
Whether the drive should be included when making backups.
是否在备份时包含该驱动器。 - bps=<bps>
-
Maximum r/w speed in bytes per second.
最大读写速度,单位为字节每秒。 -
bps_max_length=<seconds>
bps_max_length=<秒> -
Maximum length of I/O bursts in seconds.
I/O 突发的最大持续时间,单位为秒。 - bps_rd=<bps>
-
Maximum read speed in bytes per second.
最大读取速度,单位为字节每秒。 -
bps_rd_max_length=<seconds>
bps_rd_max_length=<秒> -
Maximum length of read I/O bursts in seconds.
读取 I/O 突发的最大持续时间,单位为秒。 - bps_wr=<bps> bps_wr=<字节每秒>
-
Maximum write speed in bytes per second.
最大写入速度,单位为字节每秒。 -
bps_wr_max_length=<seconds>
bps_wr_max_length=<秒> -
Maximum length of write I/O bursts in seconds.
写入 I/O 突发的最大时长,单位为秒。 - cache=<directsync | none | unsafe | writeback | writethrough>
-
The drive’s cache mode
驱动器的缓存模式 - cyls=<integer> cyls=<整数>
-
Force the drive’s physical geometry to have a specific cylinder count.
强制驱动器的物理几何结构具有特定的柱面数。 - detect_zeroes=<boolean> detect_zeroes=<布尔值>
-
Controls whether to detect and try to optimize writes of zeroes.
控制是否检测并尝试优化零值写入。 - discard=<ignore | on>
-
Controls whether to pass discard/trim requests to the underlying storage.
控制是否将 discard/trim 请求传递到底层存储。 - file=<volume>
-
The drive’s backing volume.
驱动器的后备卷。 - format=<cloop | qcow | qcow2 | qed | raw | vmdk>
-
The drive’s backing file’s data format.
驱动器的后备文件的数据格式。 - heads=<integer>
-
Force the drive’s physical geometry to have a specific head count.
强制驱动器的物理几何结构具有特定的磁头数量。 - iops=<iops>
-
Maximum r/w I/O in operations per second.
最大读/写 I/O 操作次数(每秒)。 - iops_max=<iops>
-
Maximum unthrottled r/w I/O pool in operations per second.
最大不受限制的读/写 I/O 池操作次数(每秒)。 -
iops_max_length=<seconds>
iops_max_length=<秒> -
Maximum length of I/O bursts in seconds.
I/O 突发的最大持续时间,单位为秒。 - iops_rd=<iops>
-
Maximum read I/O in operations per second.
最大读取 I/O 操作次数,单位为每秒操作数。 - iops_rd_max=<iops>
-
Maximum unthrottled read I/O pool in operations per second.
最大非限制读 I/O 池,单位为每秒操作次数。 - iops_rd_max_length=<seconds>
-
Maximum length of read I/O bursts in seconds.
读 I/O 突发的最大持续时间,单位为秒。 - iops_wr=<iops>
-
Maximum write I/O in operations per second.
最大写入 I/O 操作次数(每秒)。 - iops_wr_max=<iops>
-
Maximum unthrottled write I/O pool in operations per second.
最大非限制写入 I/O 池操作次数(每秒)。 -
iops_wr_max_length=<seconds>
iops_wr_max_length=<秒> -
Maximum length of write I/O bursts in seconds.
写入 I/O 突发的最大持续时间,单位为秒。 - mbps=<mbps> mbps=<兆字节每秒>
-
Maximum r/w speed in megabytes per second.
最大读/写速度,单位为兆字节每秒。 - mbps_max=<mbps>
-
Maximum unthrottled r/w pool in megabytes per second.
最大不受限制的读写池速度,单位为兆字节每秒。 - mbps_rd=<mbps>
-
Maximum read speed in megabytes per second.
最大读取速度,单位为兆字节每秒。 - mbps_rd_max=<mbps>
-
Maximum unthrottled read pool in megabytes per second.
最大不受限制的读取池速度,单位为兆字节每秒。 - mbps_wr=<mbps>
-
Maximum write speed in megabytes per second.
最大写入速度,单位为兆字节每秒。 - mbps_wr_max=<mbps>
-
Maximum unthrottled write pool in megabytes per second.
最大不受限写入池,单位为兆字节每秒。 -
media=<cdrom | disk> (default = disk)
media=<cdrom | disk>(默认 = disk) -
The drive’s media type.
驱动器的介质类型。 -
replicate=<boolean> (default = 1)
replicate=<boolean>(默认值 = 1) -
Whether the drive should be considered for replication jobs.
是否将该驱动器纳入复制任务考虑。 - rerror=<ignore | report | stop>
-
Read error action. 读取错误处理方式。
- secs=<integer> secs=<整数>
-
Force the drive’s physical geometry to have a specific sector count.
强制驱动器的物理几何结构具有特定的扇区数。 - serial=<serial> serial=<序列号>
-
The drive’s reported serial number, url-encoded, up to 20 bytes long.
驱动器报告的序列号,经过 URL 编码,最长 20 字节。 -
shared=<boolean> (default = 0)
shared=<boolean>(默认值 = 0) -
Mark this locally-managed volume as available on all nodes.
将此本地管理的卷标记为在所有节点上可用。This option does not share the volume automatically, it assumes it is shared already!
此选项不会自动共享卷,它假设卷已经被共享! - size=<DiskSize>
-
Disk size. This is purely informational and has no effect.
磁盘大小。此信息仅供参考,无任何影响。 - snapshot=<boolean>
-
Controls qemu’s snapshot mode feature. If activated, changes made to the disk are temporary and will be discarded when the VM is shutdown.
控制 qemu 的快照模式功能。如果启用,对磁盘所做的更改是临时的,虚拟机关闭时将被丢弃。 - ssd=<boolean>
-
Whether to expose this drive as an SSD, rather than a rotational hard disk.
是否将此驱动器作为 SSD 暴露,而不是旋转硬盘。 - trans=<auto | lba | none>
-
Force disk geometry bios translation mode.
强制磁盘几何结构的 BIOS 转换模式。 - werror=<enospc | ignore | report | stop>
-
Write error action. 写入错误操作。
- wwn=<wwn>
-
The drive’s worldwide name, encoded as 16 bytes hex string, prefixed by 0x.
驱动器的全球唯一名称,编码为 16 字节的十六进制字符串,前缀为 0x。
-
scsi[n]: [file=]<volume> [,aio=<native|threads|io_uring>] [,backup=<1|0>] [,bps=<bps>] [,bps_max_length=<seconds>] [,bps_rd=<bps>] [,bps_rd_max_length=<seconds>] [,bps_wr=<bps>] [,bps_wr_max_length=<seconds>] [,cache=<enum>] [,cyls=<integer>] [,detect_zeroes=<1|0>] [,discard=<ignore|on>] [,format=<enum>] [,heads=<integer>] [,iops=<iops>] [,iops_max=<iops>] [,iops_max_length=<seconds>] [,iops_rd=<iops>] [,iops_rd_max=<iops>] [,iops_rd_max_length=<seconds>] [,iops_wr=<iops>] [,iops_wr_max=<iops>] [,iops_wr_max_length=<seconds>] [,iothread=<1|0>] [,mbps=<mbps>] [,mbps_max=<mbps>] [,mbps_rd=<mbps>] [,mbps_rd_max=<mbps>] [,mbps_wr=<mbps>] [,mbps_wr_max=<mbps>] [,media=<cdrom|disk>] [,product=<product>] [,queues=<integer>] [,replicate=<1|0>] [,rerror=<ignore|report|stop>] [,ro=<1|0>] [,scsiblock=<1|0>] [,secs=<integer>] [,serial=<serial>] [,shared=<1|0>] [,size=<DiskSize>] [,snapshot=<1|0>] [,ssd=<1|0>] [,trans=<none|lba|auto>] [,vendor=<vendor>] [,werror=<enum>] [,wwn=<wwn>]
scsi[n]: [file=]<volume> [,aio=<native|threads|io_uring>] [,backup=<1|0>] [,bps=<bps>] [,bps_max_length=<秒>] [,bps_rd=<bps>] [,bps_rd_max_length=<秒>] [,bps_wr=<bps>] [,bps_wr_max_length=<秒>] [,cache=<枚举>] [,cyls=<整数>] [,detect_zeroes=<1|0>] [,discard=<ignore|on>] [,format=<枚举>] [,heads=<整数>] [,iops=<iops>] [,iops_max=<iops>] [,iops_max_length=<秒>] [,iops_rd=<iops>] [,iops_rd_max=<iops>] [,iops_rd_max_length=<秒>] [,iops_wr=<iops>] [,iops_wr_max=<iops>] [,iops_wr_max_length=<秒>] [,iothread=<1|0>] [,mbps=<mbps>] [,mbps_max=<mbps>] [,mbps_rd=<mbps>] [,mbps_rd_max=<mbps>] [,mbps_wr=<mbps>] [,mbps_wr_max=<mbps>] [,media=<cdrom|disk>] [,product=<产品>] [,queues=<整数>] [,replicate=<1|0>] [,rerror=<ignore|report|stop>] [,ro=<1|0>] [,scsiblock=<1|0>] [,secs=<整数>] [,serial=<序列号>] [,shared=<1|0>] [,size=<磁盘大小>] [,snapshot=<1|0>] [,ssd=<1|0>] [,trans=<none|lba|auto>] [,vendor=<厂商>] [,werror=<枚举>] [,wwn=<wwn>] -
Use volume as SCSI hard disk or CD-ROM (n is 0 to 30).
将卷用作 SCSI 硬盘或 CD-ROM(n 的范围是 0 到 30)。- aio=<io_uring | native | threads>
-
AIO type to use.
要使用的 AIO 类型。 - backup=<boolean>
-
Whether the drive should be included when making backups.
是否应在备份时包含该驱动器。 - bps=<bps>
-
Maximum r/w speed in bytes per second.
最大读/写速度,单位为字节每秒。 -
bps_max_length=<seconds>
bps_max_length=<秒> -
Maximum length of I/O bursts in seconds.
I/O 突发的最大持续时间(秒)。 - bps_rd=<bps>
-
Maximum read speed in bytes per second.
最大读取速度,单位为字节每秒。 - bps_rd_max_length=<seconds>
-
Maximum length of read I/O bursts in seconds.
读取 I/O 突发的最大持续时间,单位为秒。 - bps_wr=<bps>
-
Maximum write speed in bytes per second.
最大写入速度,单位为字节每秒。 - bps_wr_max_length=<seconds>
-
Maximum length of write I/O bursts in seconds.
写入 I/O 突发的最大持续时间(秒)。 - cache=<directsync | none | unsafe | writeback | writethrough>
-
The drive’s cache mode
驱动器的缓存模式 - cyls=<integer> cyls=<整数>
-
Force the drive’s physical geometry to have a specific cylinder count.
强制驱动器的物理几何结构具有特定的柱面数。 - detect_zeroes=<boolean>
-
Controls whether to detect and try to optimize writes of zeroes.
控制是否检测并尝试优化零值写入。 - discard=<ignore | on>
-
Controls whether to pass discard/trim requests to the underlying storage.
控制是否将丢弃/修剪请求传递到底层存储。 - file=<volume>
-
The drive’s backing volume.
驱动器的后端卷。 - format=<cloop | qcow | qcow2 | qed | raw | vmdk>
-
The drive’s backing file’s data format.
驱动器的后备文件的数据格式。 - heads=<integer>
-
Force the drive’s physical geometry to have a specific head count.
强制驱动器的物理几何结构具有特定的磁头数量。 - iops=<iops>
-
Maximum r/w I/O in operations per second.
最大读写 I/O 操作次数(每秒)。 - iops_max=<iops>
-
Maximum unthrottled r/w I/O pool in operations per second.
最大不受限制的读写 I/O 池操作次数(每秒)。 - iops_max_length=<seconds>
-
Maximum length of I/O bursts in seconds.
I/O 突发的最大持续时间(秒)。 - iops_rd=<iops>
-
Maximum read I/O in operations per second.
最大读取 I/O 操作次数(每秒)。 - iops_rd_max=<iops>
-
Maximum unthrottled read I/O pool in operations per second.
最大无节流读取 I/O 池的每秒操作次数。 -
iops_rd_max_length=<seconds>
iops_rd_max_length=<秒数> -
Maximum length of read I/O bursts in seconds.
读取 I/O 突发的最大持续时间(秒)。 - iops_wr=<iops>
-
Maximum write I/O in operations per second.
最大写入 I/O 操作次数(每秒)。 - iops_wr_max=<iops>
-
Maximum unthrottled write I/O pool in operations per second.
最大非限制写入 I/O 池操作次数(每秒)。 - iops_wr_max_length=<seconds>
-
Maximum length of write I/O bursts in seconds.
写入 I/O 突发的最大持续时间(秒)。 - iothread=<boolean>
-
Whether to use iothreads for this drive
是否为此驱动器使用 iothreads - mbps=<mbps>
-
Maximum r/w speed in megabytes per second.
最大读写速度,单位为兆字节每秒。 - mbps_max=<mbps>
-
Maximum unthrottled r/w pool in megabytes per second.
最大不受限制的读写池速度,单位为兆字节每秒。 - mbps_rd=<mbps>
-
Maximum read speed in megabytes per second.
最大读取速度,单位为兆字节每秒。 - mbps_rd_max=<mbps>
-
Maximum unthrottled read pool in megabytes per second.
最大不受限制的读取池速度,单位为兆字节每秒。
-
Maximum write speed in megabytes per second.
最大写入速度,单位为兆字节每秒。 - mbps_wr_max=<mbps>
-
Maximum unthrottled write pool in megabytes per second.
最大无节流写入池,单位为兆字节每秒。 -
media=<cdrom | disk> (default = disk)
media=<cdrom | disk>(默认 = disk) -
The drive’s media type.
驱动器的介质类型。 - product=<product>
-
The drive’s product name, up to 16 bytes long.
驱动器的产品名称,最长 16 字节。 - queues=<integer> (2 - N)
-
Number of queues. 队列数量。
-
replicate=<boolean> (default = 1)
replicate=<boolean>(默认 = 1) -
Whether the drive should be considered for replication jobs.
是否将该驱动器纳入复制任务考虑。 -
rerror=<ignore | report | stop>
rerror=<忽略 | 报告 | 停止> -
Read error action. 读取错误操作。
- ro=<boolean>
-
Whether the drive is read-only.
驱动器是否为只读。 -
scsiblock=<boolean> (default = 0)
scsiblock=<boolean>(默认值 = 0) -
Whether to use scsi-block for full passthrough of the host block device.
是否使用 scsi-block 进行主机块设备的完全直通。
Can lead to I/O errors in combination with low memory or high memory fragmentation on the host.
在主机内存较低或内存碎片较高的情况下,可能导致 I/O 错误 - secs=<integer> secs=<整数>
-
Force the drive’s physical geometry to have a specific sector count.
强制驱动器的物理几何结构具有特定的扇区数。 - serial=<serial>
-
The drive’s reported serial number, url-encoded, up to 20 bytes long.
驱动器报告的序列号,经过 URL 编码,最长 20 字节。 -
shared=<boolean> (default = 0)
shared=<boolean>(默认值 = 0) -
Mark this locally-managed volume as available on all nodes.
将此本地管理的卷标记为在所有节点上可用。This option does not share the volume automatically, it assumes it is shared already!
此选项不会自动共享卷,它假设卷已经被共享! - size=<DiskSize>
-
Disk size. This is purely informational and has no effect.
磁盘大小。此信息仅供参考,不会产生任何影响。 - snapshot=<boolean>
-
Controls qemu’s snapshot mode feature. If activated, changes made to the disk are temporary and will be discarded when the VM is shutdown.
控制 qemu 的快照模式功能。如果激活,对磁盘所做的更改是临时的,虚拟机关闭时将被丢弃。 - ssd=<boolean>
-
Whether to expose this drive as an SSD, rather than a rotational hard disk.
是否将此驱动器作为 SSD 暴露,而不是旋转硬盘。 - trans=<auto | lba | none>
-
Force disk geometry bios translation mode.
强制磁盘几何 BIOS 翻译模式。 - vendor=<vendor>
-
The drive’s vendor name, up to 8 bytes long.
驱动器的厂商名称,最长 8 个字节。 - werror=<enospc | ignore | report | stop>
-
Write error action. 写入错误操作。
- wwn=<wwn>
-
The drive’s worldwide name, encoded as 16 bytes hex string, prefixed by 0x.
驱动器的全球唯一名称,编码为 16 字节的十六进制字符串,前缀为 0x。
-
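As a sketch, assuming VM 100 and a storage named local-lvm on which a new 32 GiB volume should be allocated, a SCSI disk with discard enabled and exposed as an SSD could be added like this:
qm set 100 --scsi0 local-lvm:32,discard=on,ssd=1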
scsihw: <lsi | lsi53c810 | megasas | pvscsi | virtio-scsi-pci | virtio-scsi-single> (default = lsi)
scsihw: <lsi | lsi53c810 | megasas | pvscsi | virtio-scsi-pci | virtio-scsi-single>(默认 = lsi) -
SCSI controller model SCSI 控制器型号
- searchdomain: <string> searchdomain: <字符串>
-
cloud-init: Sets DNS search domains for a container. Create will automatically use the setting from the host if neither searchdomain nor nameserver are set.
cloud-init:为容器设置 DNS 搜索域。如果未设置 searchdomain 和 nameserver,创建时将自动使用主机的设置。 -
serial[n]: (/dev/.+|socket)
serial[n]:(/dev/.+|socket) -
Create a serial device inside the VM (n is 0 to 3), and pass through a host serial device (i.e. /dev/ttyS0), or create a unix socket on the host side (use qm terminal to open a terminal connection).
在虚拟机内创建一个串行设备(n 为 0 到 3),并直通主机串行设备(例如 /dev/ttyS0),或者在主机端创建一个 unix 套接字(使用 qm terminal 打开终端连接)。If you pass through a host serial device, it is no longer possible to migrate such machines - use with special care.
如果直通主机串行设备,则无法迁移此类虚拟机——请谨慎使用。Experimental! User reported problems with this option.
实验性功能!用户报告此选项存在问题。 -
shares: <integer> (0 - 50000) (default = 1000)
shares: <整数>(0 - 50000)(默认值 = 1000) -
Amount of memory shares for auto-ballooning. The larger the number is, the more memory this VM gets. Number is relative to weights of all other running VMs. Using zero disables auto-ballooning. Auto-ballooning is done by pvestatd.
自动气球内存份额的数量。数字越大,该虚拟机获得的内存越多。该数字相对于所有其他正在运行的虚拟机的权重。使用零将禁用自动气球。自动气球由 pvestatd 执行。 -
smbios1: [base64=<1|0>] [,family=<Base64 encoded string>] [,manufacturer=<Base64 encoded string>] [,product=<Base64 encoded string>] [,serial=<Base64 encoded string>] [,sku=<Base64 encoded string>] [,uuid=<UUID>] [,version=<Base64 encoded string>]
smbios1: [base64=<1|0>] [,family=<Base64 编码字符串>] [,manufacturer=<Base64 编码字符串>] [,product=<Base64 编码字符串>] [,serial=<Base64 编码字符串>] [,sku=<Base64 编码字符串>] [,uuid=<UUID>] [,version=<Base64 编码字符串>] -
Specify SMBIOS type 1 fields.
指定 SMBIOS 类型 1 字段。- base64=<boolean> base64=<布尔值>
-
Flag to indicate that the SMBIOS values are base64 encoded
标志,指示 SMBIOS 值是 base64 编码的 -
family=<Base64 encoded string>
family=<Base64 编码的字符串> -
Set SMBIOS1 family string.
设置 SMBIOS1 的 family 字符串。 -
manufacturer=<Base64 encoded string>
manufacturer=<Base64 编码的字符串> -
Set SMBIOS1 manufacturer.
设置 SMBIOS1 制造商。 -
product=<Base64 encoded string>
product=<Base64 编码字符串> -
Set SMBIOS1 product ID.
设置 SMBIOS1 产品 ID。 -
serial=<Base64 encoded string>
serial=<Base64 编码字符串> -
Set SMBIOS1 serial number.
设置 SMBIOS1 序列号。 -
sku=<Base64 encoded string>
sku=<Base64 编码字符串> -
Set SMBIOS1 SKU string.
设置 SMBIOS1 SKU 字符串。 - uuid=<UUID>
-
Set SMBIOS1 UUID. 设置 SMBIOS1 UUID。
-
version=<Base64 encoded string>
version=<Base64 编码字符串> -
Set SMBIOS1 version. 设置 SMBIOS1 版本。
-
smp: <integer> (1 - N) (default = 1)
smp: <整数> (1 - N) (默认 = 1) -
The number of CPUs. Please use option -sockets instead.
CPU 数量。请改用选项 -sockets。 -
sockets: <integer> (1 - N) (default = 1)
sockets: <整数> (1 - N) (默认 = 1) -
The number of CPU sockets.
CPU 插槽数量。 - spice_enhancements: [foldersharing=<1|0>] [,videostreaming=<off|all|filter>]
-
Configure additional enhancements for SPICE.
配置 SPICE 的额外增强功能。-
foldersharing=<boolean> (default = 0)
foldersharing=<boolean>(默认值 = 0) -
Enable folder sharing via SPICE. Needs Spice-WebDAV daemon installed in the VM.
通过 SPICE 启用文件夹共享。需要在虚拟机中安装 Spice-WebDAV 守护进程。 -
videostreaming=<all | filter | off> (default = off)
videostreaming=<all | filter | off>(默认值 = off) -
Enable video streaming. Uses compression for detected video streams.
启用视频流。对检测到的视频流使用压缩。
-
- sshkeys: <string> sshkeys: <字符串>
-
cloud-init: Setup public SSH keys (one key per line, OpenSSH format).
cloud-init:设置公共 SSH 密钥(每行一个密钥,OpenSSH 格式)。 -
startdate: (now | YYYY-MM-DD | YYYY-MM-DDTHH:MM:SS) (default = now)
startdate:(now | YYYY-MM-DD | YYYY-MM-DDTHH:MM:SS)(默认 = now) -
Set the initial date of the real time clock. Valid formats for the date are: now, 2006-06-17T16:01:21 or 2006-06-17.
设置实时时钟的初始日期。有效的日期格式为:“now” 或 2006-06-17T16:01:21 或 2006-06-17。 - startup: `[[order=]\d+] [,up=\d+] [,down=\d+] `
-
Startup and shutdown behavior. Order is a non-negative number defining the general startup order. Shutdown is done with reverse ordering. Additionally, you can set the up or down delay in seconds, which specifies a delay to wait before the next VM is started or stopped.
启动和关闭行为。order 是一个非负数,定义了整体的启动顺序。关闭时按相反顺序进行。此外,你可以设置 up 或 down 延迟(以秒为单位),指定在启动或关闭下一个虚拟机之前等待的时间。 -
tablet: <boolean> (default = 1)
tablet: <boolean>(默认 = 1) -
Enable/disable the USB tablet device. This device is usually needed to allow absolute mouse positioning with VNC. Else the mouse runs out of sync with normal VNC clients. If you’re running lots of console-only guests on one host, you may consider disabling this to save some context switches. This is turned off by default if you use spice (qm set <vmid> --vga qxl).
启用/禁用 USB 平板设备。通常需要此设备以允许通过 VNC 实现绝对鼠标定位。否则,鼠标在普通 VNC 客户端中会不同步。如果您在一台主机上运行大量仅控制台的虚拟机,您可以考虑禁用此功能以节省一些上下文切换。如果您使用 spice(qm set <vmid> --vga qxl),则默认关闭此功能。 - tags: <string> 标签:<string>
-
Tags of the VM. This is only meta information.
虚拟机的标签。这只是元信息。 -
tdf: <boolean> (default = 0)
tdf:<boolean>(默认值 = 0) -
Enable/disable time drift fix.
启用/禁用时间漂移修正。 -
template: <boolean> (default = 0)
template: <boolean>(默认 = 0) -
Enable/disable Template.
启用/禁用模板。 - tpmstate0: [file=]<volume> [,size=<DiskSize>] [,version=<v1.2|v2.0>]
-
Configure a Disk for storing TPM state. The format is fixed to raw.
配置用于存储 TPM 状态的磁盘。格式固定为 raw。- file=<volume>
-
The drive’s backing volume.
驱动器的后备卷。 - size=<DiskSize>
-
Disk size. This is purely informational and has no effect.
磁盘大小。此信息仅供参考,无任何影响。 -
version=<v1.2 | v2.0> (default = v1.2)
version=<v1.2 | v2.0>(默认 = v1.2) -
The TPM interface version. v2.0 is newer and should be preferred. Note that this cannot be changed later on.
TPM 接口版本。v2.0 是较新的版本,建议优先使用。请注意,之后无法更改此设置。
- unused[n]: [file=]<volume>
-
Reference to unused volumes. This is used internally, and should not be modified manually.
引用未使用的卷。这在内部使用,不应手动修改。- file=<volume>
-
The drive’s backing volume.
驱动器的后备卷。
- usb[n]: [[host=]<HOSTUSBDEVICE|spice>] [,mapping=<mapping-id>] [,usb3=<1|0>]
-
Configure an USB device (n is 0 to 4, for machine version >= 7.1 and ostype l26 or windows > 7, n can be up to 14).
配置一个 USB 设备(n 的取值范围是 0 到 4,对于机器版本>=7.1 且操作系统类型为 l26 或 Windows 版本大于 7,n 的取值可达 14)。- host=<HOSTUSBDEVICE|spice>
-
The Host USB device or port or the value spice. HOSTUSBDEVICE syntax is:
主机 USB 设备或端口,或者值为 spice。HOSTUSBDEVICE 的语法是:'bus-port(.port)*' (decimal numbers) or 'vendor_id:product_id' (hexadecimal numbers) or 'spice'
You can use the lsusb -t command to list existing usb devices.
你可以使用 lsusb -t 命令列出现有的 USB 设备。This option allows direct access to host hardware. So it is no longer possible to migrate such machines - use with special care.
此选项允许直接访问主机硬件。因此,此类机器不再支持迁移——请谨慎使用。The value spice can be used to add a usb redirection devices for spice.
值 spice 可用于为 spice 添加 USB 重定向设备。Either this or the mapping key must be set.
必须设置此项或映射键之一。 - mapping=<mapping-id>
-
The ID of a cluster wide mapping. Either this or the default-key host must be set.
集群范围映射的 ID。必须设置此项或 default-key 主机。 -
usb3=<boolean> (default = 0)
usb3=<布尔值>(默认 = 0) -
Specifies whether the given host option is a USB3 device or port. For modern guests (machine version >= 7.1 and ostype l26 and windows > 7), this flag is irrelevant (all devices are plugged into an xhci controller).
指定给定主机选项是否为 USB3 设备或端口。对于现代客户机(机器版本 >= 7.1 且操作系统类型为 l26 及 Windows 版本高于 7),此标志无关紧要(所有设备均连接到 xhci 控制器)。
-
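An illustrative sketch (the vendor_id:product_id pair is a placeholder; use lsusb -t to find real values), passing through one host device and adding a SPICE redirection port:
qm set 100 --usb0 host=046d:c52b,usb3=1
qm set 100 --usb1 host=spice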
vcpus: <integer> (1 - N) (default = 0)
vcpus: <整数>(1 - N)(默认 = 0) -
Number of hotplugged vcpus.
热插拔的虚拟 CPU 数量。 -
vga: [[type=]<enum>] [,clipboard=<vnc>] [,memory=<integer>]
vga: [[type=]<枚举>] [,clipboard=<vnc>] [,memory=<整数>] -
Configure the VGA hardware. If you want to use high resolution modes (>= 1280x1024x16) you may need to increase the vga memory option. Since QEMU 2.9 the default VGA display type is std for all OS types besides some Windows versions (XP and older) which use cirrus. The qxl option enables the SPICE display server. For win* OS you can select how many independent displays you want; Linux guests can add displays themselves. You can also run without any graphics card, using a serial device as terminal.
配置 VGA 硬件。如果您想使用高分辨率模式(>= 1280x1024x16),可能需要增加 vga 内存选项。自 QEMU 2.9 起,除某些 Windows 版本(XP 及更早版本)使用 cirrus 外,所有操作系统类型的默认 VGA 显示类型为 std。qxl 选项启用 SPICE 显示服务器。对于 win*操作系统,您可以选择所需的独立显示器数量,Linux 客户机可以自行添加显示器。您也可以在没有任何显卡的情况下运行,使用串行设备作为终端。- clipboard=<vnc>
-
Enable a specific clipboard. If not set, depending on the display type the SPICE one will be added. Migration with VNC clipboard is not yet supported!
启用特定的剪贴板。如果未设置,根据显示类型将添加 SPICE 剪贴板。迁移时尚不支持 VNC 剪贴板! -
memory=<integer> (4 - 512)
memory=<整数>(4 - 512) -
Sets the VGA memory (in MiB). Has no effect with serial display.
设置 VGA 内存(以 MiB 为单位)。对串行显示无效。 -
type=<cirrus | none | qxl | qxl2 | qxl3 | qxl4 | serial0 | serial1 | serial2 | serial3 | std | virtio | virtio-gl | vmware> (default = std)
type=<cirrus | none | qxl | qxl2 | qxl3 | qxl4 | serial0 | serial1 | serial2 | serial3 | std | virtio | virtio-gl | vmware>(默认 = std) -
Select the VGA type. Using type cirrus is not recommended.
选择 VGA 类型。不推荐使用 cirrus 类型。
- virtio[n]: [file=]<volume> [,aio=<native|threads|io_uring>] [,backup=<1|0>] [,bps=<bps>] [,bps_max_length=<seconds>] [,bps_rd=<bps>] [,bps_rd_max_length=<seconds>] [,bps_wr=<bps>] [,bps_wr_max_length=<seconds>] [,cache=<enum>] [,cyls=<integer>] [,detect_zeroes=<1|0>] [,discard=<ignore|on>] [,format=<enum>] [,heads=<integer>] [,iops=<iops>] [,iops_max=<iops>] [,iops_max_length=<seconds>] [,iops_rd=<iops>] [,iops_rd_max=<iops>] [,iops_rd_max_length=<seconds>] [,iops_wr=<iops>] [,iops_wr_max=<iops>] [,iops_wr_max_length=<seconds>] [,iothread=<1|0>] [,mbps=<mbps>] [,mbps_max=<mbps>] [,mbps_rd=<mbps>] [,mbps_rd_max=<mbps>] [,mbps_wr=<mbps>] [,mbps_wr_max=<mbps>] [,media=<cdrom|disk>] [,replicate=<1|0>] [,rerror=<ignore|report|stop>] [,ro=<1|0>] [,secs=<integer>] [,serial=<serial>] [,shared=<1|0>] [,size=<DiskSize>] [,snapshot=<1|0>] [,trans=<none|lba|auto>] [,werror=<enum>]
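For example, switching an assumed VM 100 to the SPICE display with more video memory:
qm set 100 --vga qxl,memory=32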
-
Use volume as VIRTIO hard disk (n is 0 to 15).
使用卷作为 VIRTIO 硬盘(n 为 0 到 15)。- aio=<io_uring | native | threads>
-
AIO type to use.
要使用的 AIO 类型。 - backup=<boolean>
-
Whether the drive should be included when making backups.
在进行备份时,是否应包含该驱动器。 - bps=<bps>
-
Maximum r/w speed in bytes per second.
最大读写速度,单位为字节每秒。 -
bps_max_length=<seconds>
bps_max_length=<秒数> -
Maximum length of I/O bursts in seconds.
I/O 突发的最大持续时间,单位为秒。 - bps_rd=<bps>
-
Maximum read speed in bytes per second.
最大读取速度,单位为字节每秒。 -
bps_rd_max_length=<seconds>
bps_rd_max_length=<秒数> -
Maximum length of read I/O bursts in seconds.
读取 I/O 突发的最大持续时间,单位为秒。 - bps_wr=<bps>
-
Maximum write speed in bytes per second.
最大写入速度,单位为字节每秒。 -
bps_wr_max_length=<seconds>
bps_wr_max_length=<秒数> -
Maximum length of write I/O bursts in seconds.
写入 I/O 突发的最大持续时间,单位为秒。 - cache=<directsync | none | unsafe | writeback | writethrough>
-
The drive’s cache mode
驱动器的缓存模式 - cyls=<integer> cyls=<整数>
-
Force the drive’s physical geometry to have a specific cylinder count.
强制驱动器的物理几何结构具有特定的柱面数。 - detect_zeroes=<boolean> detect_zeroes=<布尔值>
-
Controls whether to detect and try to optimize writes of zeroes.
控制是否检测并尝试优化写入零值的操作。 - discard=<ignore | on>
-
Controls whether to pass discard/trim requests to the underlying storage.
控制是否将丢弃/修剪请求传递到底层存储。 - file=<volume>
-
The drive’s backing volume.
驱动器的后备卷。 - format=<cloop | qcow | qcow2 | qed | raw | vmdk>
-
The drive’s backing file’s data format.
驱动器后备文件的数据格式。 - heads=<integer> heads=<整数>
-
Force the drive’s physical geometry to have a specific head count.
强制驱动器的物理几何结构具有特定的磁头数量。 - iops=<iops>
-
Maximum r/w I/O in operations per second.
最大读/写 I/O 操作次数(每秒)。 - iops_max=<iops>
-
Maximum unthrottled r/w I/O pool in operations per second.
最大不受限制的读/写 I/O 池操作次数每秒。 -
iops_max_length=<seconds>
iops_max_length=<秒数> -
Maximum length of I/O bursts in seconds.
I/O 突发的最大持续时间,单位为秒。 - iops_rd=<iops>
-
Maximum read I/O in operations per second.
最大读取 I/O 操作次数(每秒)。 - iops_rd_max=<iops>
-
Maximum unthrottled read I/O pool in operations per second.
最大非限制读取 I/O 池操作次数(每秒)。 - iops_rd_max_length=<seconds>
-
Maximum length of read I/O bursts in seconds.
读取 I/O 突发的最大持续时间,单位为秒。 - iops_wr=<iops>
-
Maximum write I/O in operations per second.
最大写入 I/O 操作次数,单位为每秒操作数。 - iops_wr_max=<iops>
-
Maximum unthrottled write I/O pool in operations per second.
最大不受限制的写入 I/O 池操作次数(每秒)。 -
iops_wr_max_length=<seconds>
iops_wr_max_length=<秒数> -
Maximum length of write I/O bursts in seconds.
写入 I/O 突发的最大持续时间(秒)。 - iothread=<boolean> iothread=<布尔值>
-
Whether to use iothreads for this drive
是否为此驱动器使用 iothreads - mbps=<mbps>
-
Maximum r/w speed in megabytes per second.
最大读/写速度,单位为兆字节每秒。 - mbps_max=<mbps>
-
Maximum unthrottled r/w pool in megabytes per second.
最大不受限制的读写池速度,单位为兆字节每秒。 - mbps_rd=<mbps>
-
Maximum read speed in megabytes per second.
最大读取速度,单位为兆字节每秒。 - mbps_rd_max=<mbps>
-
Maximum unthrottled read pool in megabytes per second.
最大非限速读取池,单位为兆字节每秒。 - mbps_wr=<mbps>
-
Maximum write speed in megabytes per second.
最大写入速度,单位为兆字节每秒。 - mbps_wr_max=<mbps>
-
Maximum unthrottled write pool in megabytes per second.
最大不受限制的写入池速度,单位为兆字节每秒。 -
media=<cdrom | disk> (default = disk)
media=<cdrom | disk>(默认 = disk) -
The drive’s media type.
驱动器的介质类型。 -
replicate=<boolean> (default = 1)
replicate=<boolean>(默认 = 1) -
Whether the drive should be considered for replication jobs.
是否应将该驱动器考虑用于复制任务。 - rerror=<ignore | report | stop>
-
Read error action. 读取错误操作。
- ro=<boolean>
-
Whether the drive is read-only.
驱动器是否为只读。 - secs=<integer> secs=<整数>
-
Force the drive’s physical geometry to have a specific sector count.
强制驱动器的物理几何结构具有特定的扇区数。 - serial=<serial> serial=<序列号>
-
The drive’s reported serial number, url-encoded, up to 20 bytes long.
驱动器报告的序列号,经过 URL 编码,最长 20 字节。 -
shared=<boolean> (default = 0)
shared=<boolean>(默认值 = 0) -
Mark this locally-managed volume as available on all nodes.
将此本地管理的卷标记为在所有节点上可用。This option does not share the volume automatically, it assumes it is shared already!
此选项不会自动共享卷,它假设卷已经被共享! - size=<DiskSize>
-
Disk size. This is purely informational and has no effect.
磁盘大小。仅供参考,不会产生任何影响。 - snapshot=<boolean>
-
Controls qemu’s snapshot mode feature. If activated, changes made to the disk are temporary and will be discarded when the VM is shut down.
控制 qemu 的快照模式功能。如果启用,对磁盘所做的更改是临时的,虚拟机关闭时这些更改将被丢弃。 - trans=<auto | lba | none>
-
Force disk geometry bios translation mode.
强制磁盘几何 BIOS 翻译模式。 - werror=<enospc | ignore | report | stop>
-
Write error action. 写入错误处理方式。
-
virtiofs[n]: [dirid=]<mapping-id> [,cache=<enum>] [,direct-io=<1|0>] [,expose-acl=<1|0>] [,expose-xattr=<1|0>]
virtiofs[n]: [dirid=]<映射 ID> [,cache=<枚举>] [,direct-io=<1|0>] [,expose-acl=<1|0>] [,expose-xattr=<1|0>] -
Configuration for sharing a directory between host and guest using Virtio-fs.
使用 Virtio-fs 在主机和客户机之间共享目录的配置。-
cache=<always | auto | metadata | never> (default = auto)
cache=<always | auto | metadata | never>(默认 = auto) -
The caching policy the file system should use (auto, always, metadata, never).
文件系统应使用的缓存策略(auto、always、metadata、never)。 -
direct-io=<boolean> (default = 0)
direct-io=<布尔值>(默认 = 0) -
Honor the O_DIRECT flag passed down by guest applications.
遵守由客户应用程序传递下来的 O_DIRECT 标志。 - dirid=<mapping-id> dirid=<映射标识>
-
Mapping identifier of the directory mapping to be shared with the guest. Also used as a mount tag inside the VM.
要与客户机共享的目录映射的映射标识。也用作虚拟机内的挂载标签。 -
expose-acl=<boolean> (default = 0)
expose-acl=<boolean>(默认值 = 0) -
Enable support for POSIX ACLs (enabled ACL implies xattr) for this mount.
为此挂载启用对 POSIX ACL 的支持(启用 ACL 意味着启用 xattr)。 -
expose-xattr=<boolean> (default = 0)
expose-xattr=<boolean>(默认值 = 0) -
Enable support for extended attributes for this mount.
为此挂载启用对扩展属性的支持。
-
vmgenid: <UUID> (default = 1 (autogenerated))
vmgenid: <UUID>(默认 = 1(自动生成)) -
The VM generation ID (vmgenid) device exposes a 128-bit integer value identifier to the guest OS. This allows the guest operating system to be notified when the virtual machine is executed with a different configuration (e.g. snapshot execution or creation from a template). The guest operating system notices the change, and is then able to react as appropriate by marking its copies of distributed databases as dirty, re-initializing its random number generator, etc. Note that auto-creation only works when done through API/CLI create or update methods, but not when manually editing the config file.
VM 生成 ID(vmgenid)设备向客户操作系统暴露一个 128 位整数值标识符。这允许在虚拟机以不同配置执行时(例如快照执行或从模板创建)通知客户操作系统。客户操作系统检测到变化后,能够通过将其分布式数据库的副本标记为脏、重新初始化其随机数生成器等方式做出适当反应。请注意,自动创建仅在通过 API/CLI 的创建或更新方法时有效,手动编辑配置文件时无效。 -
vmstatestorage: <storage ID>
vmstatestorage: <存储 ID> -
Default storage for VM state volumes/files.
虚拟机状态卷/文件的默认存储。 - watchdog: [[model=]<i6300esb|ib700>] [,action=<enum>]
-
Create a virtual hardware watchdog device. Once enabled (by a guest action), the watchdog must be periodically polled by an agent inside the guest or else the watchdog will reset the guest (or execute the respective action specified)
创建一个虚拟硬件看门狗设备。一旦启用(由客户机操作触发),看门狗必须由客户机内部的代理定期轮询,否则看门狗将重置客户机(或执行指定的相应操作)- action=<debug | none | pause | poweroff | reset | shutdown>
-
The action to perform if after activation the guest fails to poll the watchdog in time.
如果激活后客户机未能及时轮询看门狗,则执行的操作。 -
model=<i6300esb | ib700> (default = i6300esb)
model=<i6300esb | ib700>(默认 = i6300esb) -
Watchdog type to emulate.
要模拟的看门狗类型。
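As a hedged illustration of the watchdog option above, a virtual watchdog could be added to a guest with qm set; the VMID 101 and the chosen action are only examples:
# qm set 101 -watchdog model=i6300esb,action=reset
The guest still has to load a watchdog driver and run an agent that polls the device before the configured action is ever triggered.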
10.15. Locks 10.15. 锁定
Online migrations, snapshots and backups (vzdump) set a lock to prevent
incompatible concurrent actions on the affected VMs. Sometimes you need to
remove such a lock manually (for example after a power failure).
在线迁移、快照和备份(vzdump)会设置锁定,以防止对受影响的虚拟机执行不兼容的并发操作。有时你需要手动移除此类锁定(例如在断电后)。
# qm unlock <vmid>
Only do that if you are sure the action which set the lock is
no longer running. 只有在确定设置锁的操作已经不再运行时才执行此操作。
11. Proxmox Container Toolkit
11. Proxmox 容器工具包
Containers are a lightweight alternative to fully virtualized machines (VMs).
They use the kernel of the host system that they run on, instead of emulating a
full operating system (OS). This means that containers can access resources on
the host system directly.
容器是完全虚拟化机器(VM)的轻量级替代方案。它们使用所运行主机系统的内核,而不是模拟完整的操作系统(OS)。这意味着容器可以直接访问主机系统上的资源。
The runtime costs for containers are low, usually negligible. However, there are
some drawbacks that need to be considered:
容器的运行时开销很低,通常可以忽略不计。然而,也有一些需要考虑的缺点:
-
Only Linux distributions can be run in Proxmox Containers. It is not possible to run other operating systems like, for example, FreeBSD or Microsoft Windows inside a container.
只有 Linux 发行版可以在 Proxmox 容器中运行。无法在容器内运行其他操作系统,例如 FreeBSD 或 Microsoft Windows。 -
For security reasons, access to host resources needs to be restricted. Therefore, containers run in their own separate namespaces. Additionally some syscalls (user space requests to the Linux kernel) are not allowed within containers.
出于安全原因,需要限制对主机资源的访问。因此,容器运行在各自独立的命名空间中。此外,某些系统调用(用户空间对 Linux 内核的请求)在容器内是不允许的。
Proxmox VE uses Linux Containers (LXC) as its underlying
container technology. The “Proxmox Container Toolkit” (pct) simplifies the
usage and management of LXC, by providing an interface that abstracts
complex tasks.
Proxmox VE 使用 Linux 容器(LXC)作为其底层容器技术。“Proxmox 容器工具包”(pct)通过提供一个抽象复杂任务的接口,简化了 LXC 的使用和管理。
Containers are tightly integrated with Proxmox VE. This means that they are aware of
the cluster setup, and they can use the same network and storage resources as
virtual machines. You can also use the Proxmox VE firewall, or manage containers
using the HA framework.
容器与 Proxmox VE 紧密集成。这意味着它们了解集群设置,并且可以使用与虚拟机相同的网络和存储资源。您还可以使用 Proxmox VE 防火墙,或通过 HA 框架管理容器。
Our primary goal is to offer an environment that provides the benefits of using a
VM, but without the additional overhead. This means that Proxmox Containers can
be categorized as “System Containers”, rather than “Application Containers”.
我们的主要目标是提供一个环境,既能享受使用虚拟机的好处,又没有额外的开销。这意味着 Proxmox 容器可以被归类为“系统容器”,而不是“应用容器”。
If you want to run application containers, for example, Docker images, it
is recommended that you run them inside a Proxmox QEMU VM. This will give you
all the advantages of application containerization, while also providing the
benefits that VMs offer, such as strong isolation from the host and the ability
to live-migrate, which otherwise isn’t possible with containers. 如果你想运行应用容器,例如 Docker 镜像,建议你在 Proxmox 的 QEMU 虚拟机内运行它们。这样不仅能获得应用容器化的所有优势,还能享受虚拟机带来的好处,比如与主机的强隔离以及支持实时迁移,而这些是容器无法实现的。
11.1. Technology Overview
11.1. 技术概述
-
Integrated into Proxmox VE graphical web user interface (GUI)
集成到 Proxmox VE 图形化网页用户界面(GUI)中 -
Easy to use command-line tool pct
易于使用的命令行工具 pct -
Access via Proxmox VE REST API
通过 Proxmox VE REST API 访问 -
lxcfs to provide containerized /proc file system
lxcfs 提供容器化的 /proc 文件系统 -
Control groups (cgroups) for resource isolation and limitation
控制组(cgroups)用于资源隔离和限制 -
AppArmor and seccomp to improve security
AppArmor 和 seccomp 用于提升安全性 -
Modern Linux kernels 现代 Linux 内核
-
Image based deployment (templates)
基于镜像的部署(模板) -
Uses Proxmox VE storage library
使用 Proxmox VE 存储库 -
Container setup from host (network, DNS, storage, etc.)
从主机设置容器(网络、DNS、存储等)
11.2. Supported Distributions
11.2. 支持的发行版
List of officially supported distributions can be found below.
下面可以找到官方支持的发行版列表。
Templates for the following distributions are available through our
repositories. You can use pveam tool or the
Graphical User Interface to download them.
以下发行版的模板可通过我们的仓库获得。您可以使用 pveam 工具或用户界面下载它们。
11.2.1. Alpine Linux
Alpine Linux is a security-oriented, lightweight Linux distribution based on
musl libc and busybox.
Alpine Linux 是一个以安全为导向的轻量级 Linux 发行版,基于 musl libc 和 busybox。
For currently supported releases see:
有关当前支持的版本,请参见:
11.2.2. Arch Linux
Arch Linux, a lightweight and flexible Linux® distribution that tries to Keep It Simple.
Arch Linux 是一个轻量且灵活的 Linux® 发行版,致力于保持简单。
Arch Linux is using a rolling-release model, see its wiki for more details:
Arch Linux 使用滚动发布模型,详情请参见其维基:
11.2.3. CentOS, Almalinux, Rocky Linux
11.2.3. CentOS、Almalinux、Rocky Linux
CentOS / CentOS Stream
The CentOS Linux distribution is a stable, predictable, manageable and
reproducible platform derived from the sources of Red Hat Enterprise Linux
(RHEL)
CentOS Linux 发行版是一个稳定、可预测、易管理且可复现的平台,源自 Red Hat Enterprise Linux (RHEL) 的源码
For currently supported releases see:
有关当前支持的版本,请参见:
Almalinux
An Open Source, community owned and governed, forever-free enterprise Linux
distribution, focused on long-term stability, providing a robust
production-grade platform. AlmaLinux OS is 1:1 binary compatible with RHEL® and
pre-Stream CentOS.
一个开源的、社区拥有和管理的、永远免费的企业级 Linux 发行版,专注于长期稳定性,提供一个强大的生产级平台。AlmaLinux OS 与 RHEL®及预 Stream CentOS 实现 1:1 二进制兼容。
For currently supported releases see:
有关当前支持的版本,请参见:
Rocky Linux
Rocky Linux is a community enterprise operating system designed to be 100%
bug-for-bug compatible with America’s top enterprise Linux distribution now
that its downstream partner has shifted direction.
Rocky Linux 是一个社区企业操作系统,旨在与美国顶级企业 Linux 发行版实现 100% 的逐错误兼容,因为其下游合作伙伴已改变方向。
For currently supported releases see:
有关当前支持的版本,请参见:
11.2.4. Debian
Debian is a free operating system, developed and maintained by the Debian
project. A free Linux distribution with thousands of applications to meet our
users' needs.
Debian 是一个免费的操作系统,由 Debian 项目开发和维护。它是一个免费的 Linux 发行版,拥有数千个应用程序,以满足用户的需求。
For currently supported releases see:
有关当前支持的版本,请参见:
11.2.5. Devuan
Devuan GNU+Linux is a fork of Debian without systemd that allows users to
reclaim control over their system by avoiding unnecessary entanglements and
ensuring Init Freedom.
Devuan GNU+Linux 是 Debian 的一个分叉版本,不包含 systemd,允许用户通过避免不必要的纠缠并确保初始化自由,重新掌控他们的系统。
For currently supported releases see:
有关当前支持的版本,请参见:
11.2.6. Fedora
Fedora creates an innovative, free, and open source platform for hardware,
clouds, and containers that enables software developers and community members
to build tailored solutions for their users.
Fedora 创建了一个创新的、自由的、开源的平台,适用于硬件、云和容器,使软件开发者和社区成员能够为他们的用户构建定制化的解决方案。
For currently supported releases see:
有关当前支持的版本,请参见:
11.2.7. Gentoo
a highly flexible, source-based Linux distribution.
一个高度灵活的基于源码的 Linux 发行版。
Gentoo is using a rolling-release model.
Gentoo 使用滚动发布模式。
11.2.8. OpenSUSE
The makers' choice for sysadmins, developers and desktop users.
系统管理员、开发者和桌面用户的首选。
For currently supported releases see:
有关当前支持的版本,请参见:
11.3. Container Images 11.3. 容器镜像
Container images, sometimes also referred to as “templates” or
“appliances”, are tar archives which contain everything to run a container.
容器镜像,有时也称为“模板”或“设备”,是包含运行容器所需所有内容的 tar 归档文件。
Proxmox VE itself provides a variety of basic templates for the
most common Linux distributions. They can be
downloaded using the GUI or the pveam (short for Proxmox VE Appliance Manager)
command-line utility. Additionally, TurnKey
Linux container templates are also available to download.
Proxmox VE 本身提供了多种最常见的 Linux 发行版的基础模板。它们可以通过图形界面或 pveam(Proxmox VE Appliance Manager 的缩写)命令行工具下载。此外,还可以下载 TurnKey Linux 容器模板。
The list of available templates is updated daily through the pve-daily-update
timer. You can also trigger an update manually by executing:
可用模板列表通过 pve-daily-update 计时器每天更新。你也可以通过执行以下命令手动触发更新:
# pveam update
To view the list of available images run:
要查看可用镜像列表,请运行:
# pveam available
You can restrict this large list by specifying the section you are
interested in, for example basic system images:
你可以通过指定感兴趣的部分来限制这个庞大的列表,例如基础系统镜像:
List available system images 列出可用的系统镜像
# pveam available --section system
system  alpine-3.12-default_20200823_amd64.tar.xz
system  alpine-3.13-default_20210419_amd64.tar.xz
system  alpine-3.14-default_20210623_amd64.tar.xz
system  archlinux-base_20210420-1_amd64.tar.gz
system  centos-7-default_20190926_amd64.tar.xz
system  centos-8-default_20201210_amd64.tar.xz
system  debian-9.0-standard_9.7-1_amd64.tar.gz
system  debian-10-standard_10.7-1_amd64.tar.gz
system  devuan-3.0-standard_3.0_amd64.tar.gz
system  fedora-33-default_20201115_amd64.tar.xz
system  fedora-34-default_20210427_amd64.tar.xz
system  gentoo-current-default_20200310_amd64.tar.xz
system  opensuse-15.2-default_20200824_amd64.tar.xz
system  ubuntu-16.04-standard_16.04.5-1_amd64.tar.gz
system  ubuntu-18.04-standard_18.04.1-1_amd64.tar.gz
system  ubuntu-20.04-standard_20.04-1_amd64.tar.gz
system  ubuntu-20.10-standard_20.10-1_amd64.tar.gz
system  ubuntu-21.04-standard_21.04-1_amd64.tar.gz
Before you can use such a template, you need to download it into one of your
storages. If you are unsure which one to use, you can simply use the storage
named local for that purpose. For clustered installations, it is preferred to use a
shared storage so that all nodes can access those images.
在使用此类模板之前,您需要将它们下载到您的某个存储中。如果不确定下载到哪个存储,可以简单地使用名为 local 的本地存储。对于集群安装,建议使用共享存储,以便所有节点都能访问这些镜像。
# pveam download local debian-10.0-standard_10.0-1_amd64.tar.gz
You are now ready to create containers using that image, and you can list all
downloaded images on storage local with:
现在您可以使用该镜像创建容器,并且可以使用以下命令列出存储 local 上所有已下载的镜像:
# pveam list local
local:vztmpl/debian-10.0-standard_10.0-1_amd64.tar.gz  219.95MB
You can also use the Proxmox VE web interface GUI to download, list and delete
container templates. 您也可以使用 Proxmox VE 的网页界面 GUI 来下载、列出和删除容器模板。
pct uses them to create a new container, for example:
pct 使用它们来创建一个新的容器,例如:
# pct create 999 local:vztmpl/debian-10.0-standard_10.0-1_amd64.tar.gz
The above command shows you the full Proxmox VE volume identifiers. They include the
storage name, and most other Proxmox VE commands can use them. For example you can
delete that image later with:
上述命令向您显示了完整的 Proxmox VE 卷标识符。它们包括存储名称,大多数其他 Proxmox VE 命令也可以使用它们。例如,您可以稍后使用以下命令删除该镜像:
# pveam remove local:vztmpl/debian-10.0-standard_10.0-1_amd64.tar.gz
11.4. Container Settings
11.4. 容器设置
11.4.1. General Settings
11.4.1. 通用设置
-
the Node : the physical server on which the container will run
节点:容器将运行的物理服务器 -
the CT ID: a unique number in this Proxmox VE installation used to identify your container
CT ID:在此 Proxmox VE 安装中用于识别容器的唯一编号 -
Hostname: the hostname of the container
主机名:容器的主机名 -
Resource Pool: a logical group of containers and VMs
资源池:容器和虚拟机的逻辑分组 -
Password: the root password of the container
密码:容器的 root 密码 -
SSH Public Key: a public key for connecting to the root account over SSH
SSH 公钥:用于通过 SSH 连接 root 账户的公钥 -
Unprivileged container: this option allows you to choose at creation time whether to create a privileged or unprivileged container.
非特权容器:此选项允许在创建时选择创建特权容器还是非特权容器。
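The settings listed above can also be supplied on the command line when creating a container. A minimal sketch, assuming the template has already been downloaded and that the CT ID, hostname, pool name, storage and SSH key path are placeholders chosen for this example:
# pct create 101 local:vztmpl/debian-10.0-standard_10.0-1_amd64.tar.gz \
    -hostname web01 -unprivileged 1 -pool mypool \
    -ssh-public-keys /root/.ssh/id_rsa.pub -rootfs local-lvm:8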
Unprivileged Containers 无特权容器
Unprivileged containers use a new kernel feature called user namespaces.
The root UID 0 inside the container is mapped to an unprivileged user outside
the container. This means that most security issues (container escape, resource
abuse, etc.) in these containers will affect a random unprivileged user, and
would be a generic kernel security bug rather than an LXC issue. The LXC team
thinks unprivileged containers are safe by design.
无特权容器使用了一种称为用户命名空间的新内核特性。容器内的根 UID 0 映射到容器外的一个无特权用户。这意味着这些容器中的大多数安全问题(容器逃逸、资源滥用等)将影响一个随机的无特权用户,并且会成为一个通用的内核安全漏洞,而不是 LXC 的问题。LXC 团队认为无特权容器在设计上是安全的。
This is the default option when creating a new container.
这是创建新容器时的默认选项。
If the container uses systemd as an init system, please be aware the
systemd version running inside the container should be equal to or greater than
220. 如果容器使用 systemd 作为初始化系统,请注意容器内运行的 systemd 版本应等于或高于 220。
Privileged Containers 特权容器
Security in containers is achieved by using mandatory access control AppArmor
restrictions, seccomp filters and Linux kernel namespaces. The LXC team
considers this kind of container as unsafe, and they will not consider new
container escape exploits to be security issues worthy of a CVE and quick fix.
That’s why privileged containers should only be used in trusted environments.
容器的安全性通过使用强制访问控制 AppArmor 限制、seccomp 过滤器和 Linux 内核命名空间来实现。LXC 团队认为这类容器不安全,他们不会将新的容器逃逸漏洞视为值得 CVE 和快速修复的安全问题。这就是为什么特权容器应仅在受信任的环境中使用。
11.4.2. CPU
You can restrict the number of visible CPUs inside the container using the
cores option. This is implemented using the Linux cpuset cgroup
(control group).
A special task inside pvestatd tries to distribute running containers among
available CPUs periodically.
To view the assigned CPUs run the following command:
您可以使用 cores 选项限制容器内可见的 CPU 数量。这是通过 Linux cpuset cgroup(控制组)实现的。pvestatd 中的一个特殊任务会定期尝试将运行中的容器分配到可用的 CPU 上。要查看分配的 CPU,请运行以下命令:
# pct cpusets
---------------------
102: 6 7
105: 2 3 4 5
108: 0 1
---------------------
Containers use the host kernel directly. All tasks inside a container are
handled by the host CPU scheduler. Proxmox VE uses the Linux CFS (Completely
Fair Scheduler) scheduler by default, which has additional bandwidth
control options.
容器直接使用主机内核。容器内的所有任务都由主机的 CPU 调度器处理。Proxmox VE 默认使用 Linux 的 CFS(完全公平调度器),它具有额外的带宽控制选项。
cpulimit:
You can use this option to further limit assigned CPU time.
Please note that this is a floating point number, so it is perfectly valid to
assign two cores to a container, but restrict overall CPU consumption to half a
core.
cores: 2
cpulimit: 0.5
cpuunits:
This is a relative weight passed to the kernel scheduler. The
larger the number is, the more CPU time this container gets. Number is relative
to the weights of all the other running containers. The default is 100 (or
1024 if the host uses legacy cgroup v1). You can use this setting to
prioritize some containers.
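For example, these options could be set together with pct set; the CT ID and values below are arbitrary and only meant to illustrate the syntax:
# pct set 102 -cores 2 -cpulimit 1.5 -cpuunits 2048
This would let the container see two cores, cap its total CPU consumption at one and a half cores, and give it a higher scheduling weight than containers left at the default.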
11.4.3. Memory 11.4.3. 内存
memory:
Limit overall memory usage. This corresponds to the
memory.limit_in_bytes cgroup setting.
swap:
Allows the container to use additional swap memory from the host
swap space. This corresponds to the memory.memsw.limit_in_bytes cgroup
setting, which is set to the sum of both values (memory + swap).
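A short sketch of adjusting both limits for a container (the CT ID and sizes are examples):
# pct set 101 -memory 1024 -swap 512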
11.4.4. Mount Points 11.4.4. 挂载点
The root mount point is configured with the rootfs property. You can
configure up to 256 additional mount points. The corresponding options are
called mp0 to mp255. They can contain the following settings:
根挂载点通过 rootfs 属性进行配置。您最多可以配置 256 个额外的挂载点。相应的选项称为 mp0 到 mp255。它们可以包含以下设置:
- rootfs: [volume=]<volume> [,acl=<1|0>] [,mountoptions=<opt[;opt...]>] [,quota=<1|0>] [,replicate=<1|0>] [,ro=<1|0>] [,shared=<1|0>] [,size=<DiskSize>]
-
Use volume as container root. See below for a detailed description of all options.
使用 volume 作为容器根。下面有所有选项的详细说明。 - mp[n]: [volume=]<volume> ,mp=<Path> [,acl=<1|0>] [,backup=<1|0>] [,mountoptions=<opt[;opt...]>] [,quota=<1|0>] [,replicate=<1|0>] [,ro=<1|0>] [,shared=<1|0>] [,size=<DiskSize>]
-
Use volume as container mount point. Use the special syntax STORAGE_ID:SIZE_IN_GiB to allocate a new volume.
使用卷作为容器挂载点。使用特殊语法 STORAGE_ID:SIZE_IN_GiB 来分配一个新卷。- acl=<boolean>
-
Explicitly enable or disable ACL support.
显式启用或禁用 ACL 支持。 - backup=<boolean>
-
Whether to include the mount point in backups (only used for volume mount points).
是否将挂载点包含在备份中(仅用于卷挂载点)。 - mountoptions=<opt[;opt...]>
-
Extra mount options for rootfs/mps.
rootfs/mps 的额外挂载选项。 - mp=<Path> mp=<路径>
-
Path to the mount point as seen from inside the container.
从容器内部看到的挂载点路径。Must not contain any symlinks for security reasons.
出于安全原因,路径中不得包含任何符号链接。 - quota=<boolean>
-
Enable user quotas inside the container (not supported with zfs subvolumes)
启用容器内的用户配额(不支持 zfs 子卷) -
replicate=<boolean> (default = 1)
replicate=<boolean>(默认值 = 1) -
Will include this volume to a storage replica job.
将此卷包含在存储副本任务中。 - ro=<boolean>
-
Read-only mount point 只读挂载点
-
shared=<boolean> (default = 0)
shared=<boolean>(默认值 = 0) -
Mark this non-volume mount point as available on all nodes.
将此非卷挂载点标记为在所有节点上可用。This option does not share the mount point automatically, it assumes it is shared already!
此选项不会自动共享挂载点,它假设挂载点已经被共享! - size=<DiskSize> size=<磁盘大小>
-
Volume size (read only value).
卷大小(只读值)。 - volume=<volume>
-
Volume, device or directory to mount into the container.
要挂载到容器中的卷、设备或目录。
Currently there are three types of mount points: storage backed mount points,
bind mounts, and device mounts.
目前有三种类型的挂载点:存储支持的挂载点、绑定挂载和设备挂载。
A typical container rootfs configuration 典型的容器根文件系统配置
rootfs: thin1:base-100-disk-1,size=8G
Storage Backed Mount Points
存储支持的挂载点
Storage backed mount points are managed by the Proxmox VE storage subsystem and come
in three different flavors:
存储支持的挂载点由 Proxmox VE 存储子系统管理,分为三种类型:
-
Image based: these are raw images containing a single ext4 formatted file system.
基于镜像:这些是包含单个 ext4 格式文件系统的原始镜像。 -
ZFS subvolumes: these are technically bind mounts, but with managed storage, and thus allow resizing and snapshotting.
ZFS 子卷:从技术上讲,这些是绑定挂载,但具有托管存储,因此允许调整大小和快照。 -
Directories: passing size=0 triggers a special case where instead of a raw image a directory is created.
目录:传递 size=0 会触发一个特殊情况,此时不会创建原始镜像,而是创建一个目录。
The special option syntax STORAGE_ID:SIZE_IN_GB for storage backed
mount point volumes will automatically allocate a volume of the specified size
on the specified storage. For example, calling 存储支持的挂载点卷的特殊选项语法 STORAGE_ID:SIZE_IN_GB 会自动在指定存储上分配指定大小的卷。例如,调用
pct set 100 -mp0 thin1:10,mp=/path/in/container
will allocate a 10GB volume on the storage thin1 and replace the volume ID
placeholder 10 with the allocated volume ID, and set up the mount point in the
container at /path/in/container
将会在存储 thin1 上分配一个 10GB 的卷,并用分配的卷 ID 替换卷 ID 占位符 10,同时在容器内的 /path/in/container 设置挂载点。
Bind Mount Points 绑定挂载点
Bind mounts allow you to access arbitrary directories from your Proxmox VE host
inside a container. Some potential use cases are:
绑定挂载允许您从 Proxmox VE 主机在容器内访问任意目录。一些潜在的使用场景包括:
-
Accessing your home directory in the guest
在客户机中访问您的主目录 -
Accessing an USB device directory in the guest
在客户机中访问 USB 设备目录 -
Accessing an NFS mount from the host in the guest
从主机在客户机中访问 NFS 挂载
Bind mounts are considered to not be managed by the storage subsystem, so you
cannot make snapshots or deal with quotas from inside the container. With
unprivileged containers you might run into permission problems caused by the
user mapping and cannot use ACLs.
绑定挂载点被视为不受存储子系统管理,因此您无法在容器内部进行快照或处理配额。对于非特权容器,您可能会遇到由用户映射引起的权限问题,且无法使用 ACL。
The contents of bind mount points are not backed up when using vzdump. 使用 vzdump 时,绑定挂载点的内容不会被备份。
For security reasons, bind mounts should only be established using
source directories especially reserved for this purpose, e.g., a directory
hierarchy under /mnt/bindmounts. Never bind mount system directories like
/, /var or /etc into a container - this poses a great security risk. 出于安全原因,绑定挂载点应仅使用专门为此目的保留的源目录建立,例如/mnt/bindmounts 下的目录层次结构。切勿将系统目录如/、/var 或/etc 绑定挂载到容器中——这会带来极大的安全风险。
The bind mount source path must not contain any symlinks. 绑定挂载的源路径不得包含任何符号链接。
For example, to make the directory /mnt/bindmounts/shared accessible in the
container with ID 100 under the path /shared, add a configuration line such as:
例如,要使目录 /mnt/bindmounts/shared 在 ID 为 100 的容器中通过路径 /shared 访问,可以添加如下配置行:
mp0: /mnt/bindmounts/shared,mp=/shared
into /etc/pve/lxc/100.conf.
到 /etc/pve/lxc/100.conf 文件中。
Or alternatively use the pct tool:
或者也可以使用 pct 工具:
pct set 100 -mp0 /mnt/bindmounts/shared,mp=/shared
to achieve the same result.
实现相同的结果。
Device Mount Points 设备挂载点
Device mount points allow mounting block devices of the host directly into the
container. Similar to bind mounts, device mounts are not managed by Proxmox VE’s
storage subsystem, but the quota and acl options will be honored.
设备挂载点允许将主机的区块设备直接挂载到容器中。类似于绑定挂载,设备挂载不由 Proxmox VE 的存储子系统管理,但配额和 ACL 选项将被遵守。
Device mount points should only be used under special circumstances. In
most cases a storage backed mount point offers the same performance and a lot
more features. 设备挂载点应仅在特殊情况下使用。在大多数情况下,基于存储的挂载点提供相同的性能和更多的功能。
The contents of device mount points are not backed up when using
vzdump. 使用 vzdump 时,设备挂载点的内容不会被备份。
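If a device mount is really needed, it can be added like any other mount point by passing the device path as the volume; the CT ID, device name and target path here are placeholders for illustration only:
# pct set 100 -mp1 /dev/sdb1,mp=/mnt/device-data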
11.4.5. Network 11.4.5. 网络
You can configure up to 10 network interfaces for a single container.
The corresponding options are called net0 to net9, and they can contain the
following setting:
您可以为单个容器配置最多 10 个网络接口。相应的选项称为 net0 到 net9,它们可以包含以下设置:
- net[n]: name=<string> [,bridge=<bridge>] [,firewall=<1|0>] [,gw=<GatewayIPv4>] [,gw6=<GatewayIPv6>] [,hwaddr=<XX:XX:XX:XX:XX:XX>] [,ip=<(IPv4/CIDR|dhcp|manual)>] [,ip6=<(IPv6/CIDR|auto|dhcp|manual)>] [,link_down=<1|0>] [,mtu=<integer>] [,rate=<mbps>] [,tag=<integer>] [,trunks=<vlanid[;vlanid...]>] [,type=<veth>]
-
Specifies network interfaces for the container.
指定容器的网络接口。- bridge=<bridge>
-
Bridge to attach the network device to.
要将网络设备连接到的桥接。 - firewall=<boolean>
-
Controls whether this interface’s firewall rules should be used.
控制是否应使用此接口的防火墙规则。 - gw=<GatewayIPv4>
-
Default gateway for IPv4 traffic.
IPv4 流量的默认网关。 - gw6=<GatewayIPv6>
-
Default gateway for IPv6 traffic.
IPv6 流量的默认网关。 - hwaddr=<XX:XX:XX:XX:XX:XX>
-
A common MAC address with the I/G (Individual/Group) bit not set.
一个常见的 MAC 地址,I/G(单个/组)位未设置。 - ip=<(IPv4/CIDR|dhcp|manual)>
-
IPv4 address in CIDR format.
CIDR 格式的 IPv4 地址。 - ip6=<(IPv6/CIDR|auto|dhcp|manual)>
-
IPv6 address in CIDR format.
CIDR 格式的 IPv6 地址。 - link_down=<boolean> link_down=<布尔值>
-
Whether this interface should be disconnected (like pulling the plug).
是否应断开此接口(如拔掉插头)。 -
mtu=<integer> (64 - 65535)
mtu=<整数>(64 - 65535) -
Maximum transfer unit of the interface. (lxc.network.mtu)
接口的最大传输单元。(lxc.network.mtu) - name=<string> name=<字符串>
-
Name of the network device as seen from inside the container. (lxc.network.name)
容器内部看到的网络设备名称。(lxc.network.name) - rate=<mbps> 速率=<mbps>
-
Apply rate limiting to the interface
对接口应用速率限制 -
tag=<integer> (1 - 4094)
标签=<整数>(1 - 4094) -
VLAN tag for this interface.
该接口的 VLAN 标签。 - trunks=<vlanid[;vlanid...]>
-
VLAN ids to pass through the interface
通过该接口传递的 VLAN ID - type=<veth>
-
Network interface type. 网络接口类型。
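As an illustrative sketch of the options above, an interface using DHCP, the firewall, a VLAN tag and a rate limit could be configured as follows (all values are examples):
# pct set 100 -net0 name=eth0,bridge=vmbr0,ip=dhcp,firewall=1,tag=30,rate=50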
11.4.6. Automatic Start and Shutdown of Containers
11.4.6. 容器的自动启动和关闭
To automatically start a container when the host system boots, select the
option Start at boot in the Options panel of the container in the web
interface or run the following command:
要在主机系统启动时自动启动容器,请在网页界面的容器选项面板中选择“开机启动”选项,或运行以下命令:
# pct set CTID -onboot 1
If you want to fine tune the boot order of your containers, you can use the
following parameters:
如果您想微调容器的启动顺序,可以使用以下参数:
-
Start/Shutdown order: Defines the start order priority. For example, set it to 1 if you want the CT to be the first to be started. (We use the reverse startup order for shutdown, so a container with a start order of 1 would be the last to be shut down)
启动/关闭顺序:定义启动顺序的优先级。例如,如果您希望该容器成为第一个启动的容器,则将其设置为 1。(关闭时使用相反的启动顺序,因此启动顺序为 1 的容器将是最后一个关闭的) -
Startup delay: Defines the interval between this container start and subsequent containers starts. For example, set it to 240 if you want to wait 240 seconds before starting other containers.
启动延迟:定义此容器启动与后续容器启动之间的间隔。例如,如果您希望在启动其他容器之前等待 240 秒,则将其设置为 240。 -
Shutdown timeout: Defines the duration in seconds Proxmox VE should wait for the container to be offline after issuing a shutdown command. By default this value is set to 60, which means that Proxmox VE will issue a shutdown request, wait 60s for the machine to be offline, and if after 60s the machine is still online will notify that the shutdown action failed.
关闭超时:定义 Proxmox VE 在发出关闭命令后等待容器离线的秒数。默认值为 60,这意味着 Proxmox VE 会发出关闭请求,等待 60 秒以使机器离线,如果 60 秒后机器仍然在线,则会通知关闭操作失败。
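These values are stored together in the startup property. A hedged example, enabling autostart and making CT 101 start first, wait 30 seconds before the next guest, and get 60 seconds to shut down (all numbers are arbitrary):
# pct set 101 -onboot 1 -startup order=1,up=30,down=60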
Please note that containers without a Start/Shutdown order parameter will
always start after those where the parameter is set, and this parameter only
makes sense between the machines running locally on a host, and not
cluster-wide.
请注意,没有设置启动/关闭顺序参数的容器将始终在设置了该参数的容器之后启动,并且该参数仅在主机上本地运行的机器之间有意义,而不适用于整个集群。
If you require a delay between the host boot and the booting of the first
container, see the section on
Proxmox VE Node Management.
如果您需要在主机启动和第一个容器启动之间设置延迟,请参阅 Proxmox VE 节点管理部分。
11.4.7. Hookscripts 11.4.7. 钩子脚本
You can add a hook script to CTs with the config property hookscript.
您可以通过配置属性 hookscript 向 CT 添加钩子脚本。
# pct set 100 -hookscript local:snippets/hookscript.pl
It will be called during various phases of the guests lifetime. For an example
and documentation see the example script under
/usr/share/pve-docs/examples/guest-example-hookscript.pl.
它将在客户机生命周期的各个阶段被调用。示例和文档请参见 /usr/share/pve-docs/examples/guest-example-hookscript.pl 下的示例脚本。
11.5. Security Considerations
11.5. 安全注意事项
Containers use the kernel of the host system. This exposes an attack surface
for malicious users. In general, full virtual machines provide better
isolation. This should be considered if containers are provided to unknown or
untrusted people.
容器使用主机系统的内核。这为恶意用户暴露了攻击面。一般来说,完整的虚拟机提供更好的隔离。如果容器提供给未知或不受信任的人,应考虑这一点。
To reduce the attack surface, LXC uses many security features like AppArmor,
CGroups and kernel namespaces.
为了减少攻击面,LXC 使用了许多安全特性,如 AppArmor、CGroups 和内核命名空间。
11.5.1. AppArmor
AppArmor profiles are used to restrict access to possibly dangerous actions.
Some system calls, i.e. mount, are prohibited from execution.
AppArmor 配置文件用于限制对可能危险操作的访问。一些系统调用,例如 mount,被禁止执行。
To trace AppArmor activity, use:
要跟踪 AppArmor 活动,请使用:
# dmesg | grep apparmor
Although it is not recommended, AppArmor can be disabled for a container. This
brings security risks with it. Some syscalls can lead to privilege escalation
when executed within a container if the system is misconfigured or if a LXC or
Linux Kernel vulnerability exists.
虽然不推荐,但可以为容器禁用 AppArmor。这会带来安全风险。如果系统配置错误或存在 LXC 或 Linux 内核漏洞,某些系统调用在容器内执行时可能导致权限提升。
To disable AppArmor for a container, add the following line to the container
configuration file located at /etc/pve/lxc/CTID.conf:
要禁用容器的 AppArmor,请在位于 /etc/pve/lxc/CTID.conf 的容器配置文件中添加以下行:
lxc.apparmor.profile = unconfined
Please note that this is not recommended for production use. 请注意,这不建议在生产环境中使用。
11.5.2. Control Groups (cgroup)
11.5.2. 控制组(cgroup)
cgroup is a kernel
mechanism used to hierarchically organize processes and distribute system
resources.
cgroup 是一种内核机制,用于分层组织进程并分配系统资源。
The main resources controlled via cgroups are CPU time, memory and swap
limits, and access to device nodes. cgroups are also used to "freeze" a
container before taking snapshots.
通过 cgroups 控制的主要资源包括 CPU 时间、内存和交换空间限制,以及对设备节点的访问。cgroups 还用于在拍摄快照之前“冻结”容器。
There are 2 versions of cgroups currently available,
legacy
and
cgroupv2.
目前有两种版本的 cgroups 可用,分别是传统版和 cgroupv2。
Since Proxmox VE 7.0, the default is a pure cgroupv2 environment. Previously a
"hybrid" setup was used, where resource control was mainly done in cgroupv1
with an additional cgroupv2 controller which could take over some subsystems
via the cgroup_no_v1 kernel command-line parameter. (See the
kernel
parameter documentation for details.)
自 Proxmox VE 7.0 起,默认环境为纯 cgroupv2。之前使用的是“混合”设置,资源控制主要在 cgroupv1 中进行,同时附加了一个 cgroupv2 控制器,可以通过 cgroup_no_v1 内核命令行参数接管某些子系统。(详情请参见内核参数文档。)
CGroup Version Compatibility
CGroup 版本兼容性
The main difference between pure cgroupv2 and the old hybrid environments
regarding Proxmox VE is that with cgroupv2 memory and swap are now controlled
independently. The memory and swap settings for containers can map directly to
these values, whereas previously only the memory limit and the limit of the
sum of memory and swap could be limited.
关于 Proxmox VE,纯 cgroupv2 与旧的混合环境之间的主要区别在于,cgroupv2 中内存和交换空间(swap)现在是独立控制的。容器的内存和交换空间设置可以直接映射到这些值,而之前只能限制内存上限和内存与交换空间总和的上限。
Another important difference is that the devices controller is configured in a
completely different way. Because of this, file system quotas are currently not
supported in a pure cgroupv2 environment.
另一个重要区别是设备控制器的配置方式完全不同。因此,在纯 cgroupv2 环境中,目前不支持文件系统配额。
cgroupv2 support by the container’s OS is needed to run in a pure cgroupv2
environment. Containers running systemd version 231 or newer support
cgroupv2 [50], as do containers not using systemd as init
system [51].
容器操作系统需要支持 cgroupv2 才能在纯 cgroupv2 环境中运行。运行 systemd 版本 231 或更新版本的容器支持 cgroupv2[50],不使用 systemd 作为初始化系统的容器也支持 cgroupv2[51]。
CentOS 7 and Ubuntu 16.10 are two prominent Linux distribution releases
whose systemd version is too old to run in a cgroupv2 environment. In that
case, you can either upgrade the distribution inside the container to a newer
release, or switch the host back to the legacy cgroup version as described below.
Changing CGroup Version 更改 CGroup 版本
If file system quotas are not required and all containers support cgroupv2,
it is recommended to stick to the new default. 如果不需要文件系统配额且所有容器都支持 cgroupv2,建议保持使用新的默认版本。
To switch back to the previous version the following kernel command-line
parameter can be used:
要切换回之前的版本,可以使用以下内核命令行参数:
systemd.unified_cgroup_hierarchy=0
See this section on editing the kernel boot
command line on where to add the parameter.
请参阅本节关于编辑内核启动命令行的内容,了解参数应添加的位置。
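To verify which cgroup layout a node is currently running, a generic Linux check (not specific to Proxmox VE) is to look at the file system type mounted at /sys/fs/cgroup:
# stat -fc %T /sys/fs/cgroup/
This prints cgroup2fs on a pure cgroupv2 host and tmpfs on a legacy/hybrid one.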
11.6. Guest Operating System Configuration
11.6. 客户操作系统配置
Proxmox VE tries to detect the Linux distribution in the container, and modifies
some files. Here is a short list of things done at container startup:
Proxmox VE 会尝试检测容器中的 Linux 发行版,并修改一些文件。以下是在容器启动时执行的一些操作的简短列表:
- set /etc/hostname 设置 /etc/hostname
-
to set the container name
以设置容器名称 - modify /etc/hosts 修改 /etc/hosts
-
to allow lookup of the local hostname
以允许本地主机名的查找 - network setup 网络设置
-
pass the complete network setup to the container
将完整的网络设置传递给容器 - configure DNS 配置 DNS
-
pass information about DNS servers
传递有关 DNS 服务器的信息 -
adapt the init system
调整初始化系统 -
for example, fix the number of spawned getty processes
例如,修正生成的 getty 进程数量 -
set the root password
设置 root 密码 -
when creating a new container
创建新容器时 - rewrite ssh_host_keys 重写 ssh_host_keys
-
so that each container has unique keys
以便每个容器都有唯一的密钥 - randomize crontab 随机化 crontab
-
so that cron does not start at the same time on all containers
这样 cron 就不会在所有容器上同时启动
Changes made by Proxmox VE are enclosed by comment markers:
Proxmox VE 所做的更改会被注释标记包围:
# --- BEGIN PVE ---
<data>
# --- END PVE ---
Those markers will be inserted at a reasonable location in the file. If such a
section already exists, it will be updated in place and will not be moved.
这些标记会插入到文件中的合理位置。如果该部分已存在,则会在原地更新,不会被移动。
Modification of a file can be prevented by adding a .pve-ignore. file for it.
For instance, if the file /etc/.pve-ignore.hosts exists then the /etc/hosts
file will not be touched. This can be a simple empty file created via:
通过为文件添加一个 .pve-ignore. 文件,可以防止该文件被修改。例如,如果存在 /etc/.pve-ignore.hosts 文件,那么 /etc/hosts 文件将不会被更改。这个文件可以是一个简单的空文件,创建命令如下:
# touch /etc/.pve-ignore.hosts
Most modifications are OS dependent, so they differ between different
distributions and versions. You can completely disable modifications by
manually setting the ostype to unmanaged.
大多数修改依赖于操作系统,因此在不同的发行版和版本之间会有所不同。您可以通过手动将 ostype 设置为 unmanaged 来完全禁用修改。
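For example, to turn all of these modifications off for a single container (the CT ID is an example):
# pct set 100 -ostype unmanaged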
OS type detection is done by testing for certain files inside the
container. Proxmox VE first checks the /etc/os-release file
[52].
If that file is not present, or it does not contain a clearly recognizable
distribution identifier the following distribution specific release files are
checked.
操作系统类型的检测是通过测试容器内的某些文件来完成的。Proxmox VE 首先检查 /etc/os-release 文件 [52]。如果该文件不存在,或者其中不包含明确可识别的发行版标识符,则会检查以下特定发行版的发布文件。
- Ubuntu
-
inspect /etc/lsb-release (DISTRIB_ID=Ubuntu)
查看 /etc/lsb-release(DISTRIB_ID=Ubuntu) - Debian
-
test /etc/debian_version
测试 /etc/debian_version - Fedora
-
test /etc/fedora-release
测试 /etc/fedora-release - RedHat or CentOS RedHat 或 CentOS
-
test /etc/redhat-release
测试 /etc/redhat-release - ArchLinux
-
test /etc/arch-release 测试 /etc/arch-release
- Alpine
-
test /etc/alpine-release
测试 /etc/alpine-release - Gentoo
-
test /etc/gentoo-release
测试 /etc/gentoo-release
Container start fails if the configured ostype differs from the auto
detected type. 如果配置的操作系统类型与自动检测的类型不同,容器启动将失败。
11.7. Container Storage 11.7. 容器存储
The Proxmox VE LXC container storage model is more flexible than traditional
container storage models. A container can have multiple mount points. This
makes it possible to use the best suited storage for each application.
Proxmox VE 的 LXC 容器存储模型比传统的容器存储模型更灵活。一个容器可以有多个挂载点。这使得可以为每个应用程序使用最合适的存储。
For example the root file system of the container can be on slow and cheap
storage while the database can be on fast and distributed storage via a second
mount point. See section Mount Points for further
details.
例如,容器的根文件系统可以位于速度较慢且廉价的存储上,而数据库则可以通过第二个挂载点位于快速且分布式的存储上。有关详细信息,请参见“挂载点”一节。
Any storage type supported by the Proxmox VE storage library can be used. This means
that containers can be stored on local (for example lvm, zfs or directory),
shared external (like iSCSI, NFS) or even distributed storage systems like
Ceph. Advanced storage features like snapshots or clones can be used if the
underlying storage supports them. The vzdump backup tool can use snapshots to
provide consistent container backups.
Proxmox VE 存储库支持的任何存储类型都可以使用。这意味着容器可以存储在本地(例如 lvm、zfs 或目录)、共享外部存储(如 iSCSI、NFS)甚至分布式存储系统(如 Ceph)上。如果底层存储支持,先进的存储功能如快照或克隆也可以使用。vzdump 备份工具可以使用快照来提供一致的容器备份。
Furthermore, local devices or local directories can be mounted directly using
bind mounts. This gives access to local resources inside a container with
practically zero overhead. Bind mounts can be used as an easy way to share data
between containers.
此外,本地设备或本地目录可以通过绑定挂载直接挂载。这使得容器内可以访问本地资源,几乎没有任何开销。绑定挂载可以作为在容器之间共享数据的简便方式。
11.7.1. FUSE Mounts 11.7.1. FUSE 挂载
Because of existing issues in the Linux kernel’s freezer subsystem the
usage of FUSE mounts inside a container is strongly advised against, as
containers need to be frozen for suspend or snapshot mode backups. 由于 Linux 内核的 freezer 子系统存在已知问题,强烈建议不要在容器内使用 FUSE 挂载,因为容器需要被冻结以进行挂起或快照模式备份。
If FUSE mounts cannot be replaced by other mounting mechanisms or storage
technologies, it is possible to establish the FUSE mount on the Proxmox host
and use a bind mount point to make it accessible inside the container.
如果无法用其他挂载机制或存储技术替代 FUSE 挂载,可以在 Proxmox 主机上建立 FUSE 挂载,并使用绑定挂载点使其在容器内可访问。
11.7.2. Using Quotas Inside Containers
11.7.2. 容器内使用配额
Quotas allow you to set limits inside a container for the amount of disk space that
each user can use.
配额允许在容器内为每个用户设置磁盘空间使用限制。
This currently requires the use of legacy cgroups. 这目前需要使用传统的 cgroups。
This only works on ext4 image based storage types and currently only
works with privileged containers. 这仅适用于基于 ext4 镜像的存储类型,并且目前仅适用于特权容器。
Activating the quota option causes the following mount options to be used for
a mount point:
usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0
启用配额选项会导致挂载点使用以下挂载选项:usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0
This allows quotas to be used like on any other system. You can initialize the
/aquota.user and /aquota.group files by running:
这允许像在其他系统上一样使用配额。你可以通过运行以下命令初始化 /aquota.user 和 /aquota.group 文件:
# quotacheck -cmug /
# quotaon /
Then edit the quotas using the edquota command. Refer to the documentation of
the distribution running inside the container for details.
然后使用 edquota 命令编辑配额。具体细节请参考容器内运行的发行版文档。
You need to run the above commands for every mount point by passing the
mount point’s path instead of just /. 您需要针对每个挂载点运行上述命令,传入挂载点的路径,而不仅仅是 /。
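After that, the limits for an individual user can be edited with the standard quota tools of the distribution inside the container, for example (the username is arbitrary):
# edquota -u www-data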
11.7.3. Using ACLs Inside Containers
11.7.3. 容器内使用 ACL
The standard Posix Access Control Lists are also available inside
containers. ACLs allow you to set more detailed file ownership than the
traditional user/group/others model.
标准的 Posix 访问控制列表(ACL)在容器内同样可用。ACL 允许您设置比传统的用户/组/其他模型更详细的文件所有权。
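They are managed with the usual tools of the guest distribution; a minimal sketch granting an extra user access to a directory (user and path are examples):
# setfacl -m u:www-data:rwx /srv/webdata
# getfacl /srv/webdata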
11.7.4. Backup of Container mount points
11.7.4. 容器挂载点的备份
To include a mount point in backups, enable the backup option for it in the
container configuration. For an existing mount point mp0
要将挂载点包含在备份中,请在容器配置中启用该挂载点的备份选项。对于已有的挂载点 mp0
mp0: guests:subvol-100-disk-1,mp=/root/files,size=8G
add backup=1 to enable it.
添加 backup=1 以启用备份。
mp0: guests:subvol-100-disk-1,mp=/root/files,size=8G,backup=1
When creating a new mount point in the GUI, this option is enabled by
default. 在 GUI 中创建新的挂载点时,该选项默认启用。
To disable backups for a mount point, add backup=0 in the way described
above, or uncheck the Backup checkbox on the GUI.
要禁用挂载点的备份,请按照上述方法添加 backup=0,或在图形界面中取消选中备份复选框。
11.7.5. Replication of Containers mount points
11.7.5. 容器挂载点的复制
By default, additional mount points are replicated when the Root Disk is
replicated. If you want the Proxmox VE storage replication mechanism to skip a mount
point, you can set the Skip replication option for that mount point.
As of Proxmox VE 5.0, replication requires a storage of type zfspool. Adding a
mount point to a different type of storage when the container has replication
configured requires Skip replication to be enabled for that mount point.
默认情况下,当根磁盘被复制时,附加的挂载点也会被复制。如果您希望 Proxmox VE 存储复制机制跳过某个挂载点,可以为该挂载点设置跳过复制选项。从 Proxmox VE 5.0 开始,复制需要使用类型为 zfspool 的存储。当容器配置了复制且将挂载点添加到不同类型的存储时,必须为该挂载点启用跳过复制。
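On the command line this corresponds to the replicate flag of the mount point. A hedged sketch that adds a new 8 GiB mount point on a non-zfspool storage and excludes it from replication (storage name, CT ID and path are examples):
# pct set 100 -mp1 other-storage:8,mp=/bulk-data,replicate=0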
11.8. Backup and Restore
11.8. 备份与恢复
11.8.1. Container Backup
11.8.1. 容器备份
It is possible to use the vzdump tool for container backup. Please refer to
the vzdump manual page for details.
可以使用 vzdump 工具进行容器备份。详情请参阅 vzdump 手册页。
11.8.2. Restoring Container Backups
11.8.2. 恢复容器备份
Restoring container backups made with vzdump is possible using the pct
restore command. By default, pct restore will attempt to restore as much of
the backed up container configuration as possible. It is possible to override
the backed up configuration by manually setting container options on the
command line (see the pct manual page for details).
可以使用 pct restore 命令恢复使用 vzdump 制作的容器备份。默认情况下,pct restore 会尽可能恢复备份的容器配置。也可以通过在命令行手动设置容器选项来覆盖备份的配置(详情请参阅 pct 手册页)。
pvesm extractconfig can be used to view the backed up configuration
contained in a vzdump archive. pvesm extractconfig 可用于查看包含在 vzdump 归档中的备份配置。
There are two basic restore modes, only differing by their handling of mount
points:
有两种基本的恢复模式,仅在挂载点的处理方式上有所不同:
“Simple” Restore Mode “简单”恢复模式
If neither the rootfs parameter nor any of the optional mpX parameters are
explicitly set, the mount point configuration from the backed up configuration
file is restored using the following steps:
如果既未显式设置 rootfs 参数,也未设置任何可选的 mpX 参数,则使用以下步骤恢复备份配置文件中的挂载点配置:
-
Extract mount points and their options from backup
从备份中提取挂载点及其选项 -
Create volumes for storage backed mount points on the storage provided with the storage parameter (default: local).
为存储参数提供的存储(默认:local)上的存储支持挂载点创建卷 -
Extract files from backup archive
从备份归档中提取文件 -
Add bind and device mount points to restored configuration (limited to root user)
将绑定和设备挂载点添加到恢复的配置中(仅限 root 用户)
Since bind and device mount points are never backed up, no files are
restored in the last step, but only the configuration options. The assumption
is that such mount points are either backed up with another mechanism (e.g.,
NFS space that is bind mounted into many containers), or not intended to be
backed up at all. 由于绑定挂载点和设备挂载点从不进行备份,最后一步不会恢复任何文件,只恢复配置选项。假设此类挂载点要么通过其他机制备份(例如,绑定挂载到多个容器中的 NFS 空间),要么根本不打算备份。
This simple mode is also used by the container restore operations in the web
interface.
这种简单模式也被用于网页界面的容器恢复操作。
“Advanced” Restore Mode “高级”恢复模式
By setting the rootfs parameter (and optionally, any combination of mpX
parameters), the pct restore command is automatically switched into an
advanced mode. This advanced mode completely ignores the rootfs and mpX
configuration options contained in the backup archive, and instead only uses
the options explicitly provided as parameters.
通过设置 rootfs 参数(以及可选的任意组合的 mpX 参数),pct restore 命令会自动切换到高级模式。该高级模式完全忽略备份归档中包含的 rootfs 和 mpX 配置选项,而只使用显式作为参数提供的选项。
This mode allows flexible configuration of mount point settings at restore
time, for example:
此模式允许在恢复时灵活配置挂载点设置,例如:
-
Set target storages, volume sizes and other options for each mount point individually
为每个挂载点单独设置目标存储、卷大小及其他选项 -
Redistribute backed up files according to new mount point scheme
根据新的挂载点方案重新分配备份文件 -
Restore to device and/or bind mount points (limited to root user)
恢复到设备和/或绑定挂载点(仅限 root 用户)
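A sketch of an advanced-mode restore that places the root file system and one data mount point on different storages; the CT ID, storage names, sizes and the <archive> placeholder are examples only:
# pct restore 600 local:backup/<archive> \
    -rootfs local-lvm:8 \
    -mp0 other-storage:16,mp=/srv/data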
11.9. Managing Containers with pct
11.9. 使用 pct 管理容器
The “Proxmox Container Toolkit” (pct) is the command-line tool to manage
Proxmox VE containers. It enables you to create or destroy containers, as well as
control the container execution (start, stop, reboot, migrate, etc.). It can be
used to set parameters in the config file of a container, for example the
network configuration or memory limits.
“Proxmox 容器工具包”(pct)是管理 Proxmox VE 容器的命令行工具。它使您能够创建或销毁容器,以及控制容器的运行(启动、停止、重启、迁移等)。它还可以用于设置容器配置文件中的参数,例如网络配置或内存限制。
11.9.1. CLI Usage Examples
11.9.1. 命令行使用示例
Create a container based on a Debian template (provided you have already
downloaded the template via the web interface)
基于 Debian 模板创建容器(前提是您已经通过网页界面下载了该模板)
# pct create 100 /var/lib/vz/template/cache/debian-10.0-standard_10.0-1_amd64.tar.gz
Start container 100 启动容器 100
# pct start 100
Start a login session via getty
通过 getty 启动登录会话
# pct console 100
Enter the LXC namespace and run a shell as root user
进入 LXC 命名空间并以 root 用户身份运行 Shell
# pct enter 100
Display the configuration
显示配置
# pct config 100
Add a network interface called eth0, bridged to the host bridge vmbr0, set
the address and gateway, while it’s running
添加一个名为 eth0 的网络接口,桥接到主机桥接 vmbr0,设置地址和网关,同时保持运行状态
# pct set 100 -net0 name=eth0,bridge=vmbr0,ip=192.168.15.147/24,gw=192.168.15.1
Reduce the memory of the container to 512MB
将容器的内存减少到 512MB
# pct set 100 -memory 512
Destroying a container always removes it from Access Control Lists and it always
removes the firewall configuration of the container. You have to activate
--purge, if you want to additionally remove the container from replication jobs,
backup jobs and HA resource configurations.
销毁容器时会自动将其从访问控制列表中移除,并且会自动删除容器的防火墙配置。如果你还想将容器从复制任务、备份任务和高可用资源配置中移除,必须激活 --purge 选项。
# pct destroy 100 --purge
Move a mount point volume to a different storage.
将挂载点卷移动到不同的存储。
# pct move-volume 100 mp0 other-storage
Reassign a volume to a different CT. This will remove the volume mp0 from
the source CT and attach it as mp1 to the target CT. In the background
the volume is being renamed so that the name matches the new owner.
将卷重新分配给不同的 CT。这将从源 CT 中移除卷 mp0,并将其作为 mp1 附加到目标 CT。在后台,卷的名称会被重命名,以匹配新的所有者。
# pct move-volume 100 mp0 --target-vmid 200 --target-volume mp1
11.9.2. Obtaining Debugging Logs
11.9.2. 获取调试日志
In case pct start is unable to start a specific container, it might be
helpful to collect debugging output by passing the --debug flag (replace CTID with
the container’s CTID):
如果 pct start 无法启动特定容器,传递--debug 标志(将 CTID 替换为容器的 CTID)收集调试输出可能会有所帮助:
# pct start CTID --debug
Alternatively, you can use the following lxc-start command, which will save
the debug log to the file specified by the -o output option:
或者,您可以使用以下 lxc-start 命令,它会将调试日志保存到由-o 输出选项指定的文件中:
# lxc-start -n CTID -F -l DEBUG -o /tmp/lxc-CTID.log
This command will attempt to start the container in foreground mode, to stop
the container run pct shutdown CTID or pct stop CTID in a second terminal.
此命令将尝试以前台模式启动容器,要停止容器,请在另一个终端运行 pct shutdown CTID 或 pct stop CTID。
The collected debug log is written to /tmp/lxc-CTID.log.
收集的调试日志写入 /tmp/lxc-CTID.log。
If you have changed the container’s configuration since the last start
attempt with pct start, you need to run pct start at least once to also
update the configuration used by lxc-start. 如果自上次使用 pct start 启动尝试以来更改了容器的配置,则需要至少运行一次 pct start,以更新 lxc-start 使用的配置。
11.10. Migration 11.10. 迁移
If you have a cluster, you can migrate your Containers with
如果你有一个集群,你可以使用以下命令迁移你的容器
# pct migrate <ctid> <target>
This works as long as your Container is offline. If it has local volumes or
mount points defined, the migration will copy the content over the network to
the target host if the same storage is defined there.
只要你的容器处于离线状态,这个方法就有效。如果定义了本地卷或挂载点,迁移时会将内容通过网络复制到目标主机,前提是目标主机也定义了相同的存储。
Running containers cannot be live-migrated due to technical limitations. You can
do a restart migration, which shuts down, moves and then starts a container
again on the target node. As containers are very lightweight, this results
normally only in a downtime of some hundreds of milliseconds.
由于技术限制,运行中的容器无法进行实时迁移。你可以执行重启迁移,这会先关闭容器,移动容器,然后在目标节点上重新启动容器。由于容器非常轻量级,通常只会导致几百毫秒的停机时间。
A restart migration can be done through the web interface or by using the
--restart flag with the pct migrate command.
重启迁移可以通过网页界面完成,也可以使用 pct migrate 命令加上 --restart 参数来执行。
A restart migration will shut down the Container and kill it after the
specified timeout (the default is 180 seconds). Then it will migrate the
Container like an offline migration and when finished, it starts the Container
on the target node.
重启迁移将关闭容器,并在指定的超时时间后终止它(默认是 180 秒)。然后,它会像离线迁移一样迁移容器,完成后在目标节点上启动容器。
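For example, a restart migration with a shorter timeout could look like this (the node name and timeout are examples):
# pct migrate 100 node2 --restart --timeout 120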
11.11. Configuration 11.11. 配置
The /etc/pve/lxc/<CTID>.conf file stores container configuration, where
<CTID> is the numeric ID of the given container. Like all other files stored
inside /etc/pve/, they get automatically replicated to all other cluster
nodes.
/etc/pve/lxc/<CTID>.conf 文件存储容器配置,其中 <CTID> 是给定容器的数字 ID。与存储在 /etc/pve/ 中的所有其他文件一样,它们会自动复制到所有其他集群节点。
CTIDs < 100 are reserved for internal purposes, and CTIDs need to be
unique cluster wide. CTID 小于 100 的保留用于内部用途,且 CTID 在整个集群中必须唯一。
A sample container configuration 示例容器配置
ostype: debian
arch: amd64
hostname: www
memory: 512
swap: 512
net0: bridge=vmbr0,hwaddr=66:64:66:64:64:36,ip=dhcp,name=eth0,type=veth
rootfs: local:107/vm-107-disk-1.raw,size=7G
The configuration files are simple text files. You can edit them using a normal
text editor, for example, vi or nano.
This is sometimes useful to do small corrections, but keep in mind that you
need to restart the container to apply such changes.
配置文件是简单的文本文件。您可以使用普通的文本编辑器编辑它们,例如 vi 或 nano。有时这样做对进行小的修正很有用,但请记住,您需要重启容器才能使这些更改生效。
For that reason, it is usually better to use the pct command to generate and
modify those files, or do the whole thing using the GUI.
Our toolkit is smart enough to instantaneously apply most changes to running
containers. This feature is called “hot plug”, and there is no need to restart
the container in that case.
因此,通常更好使用 pct 命令来生成和修改这些文件,或者通过图形界面完成整个操作。我们的工具包足够智能,能够即时应用大多数对运行中容器的更改。此功能称为“热插拔”,在这种情况下无需重启容器。
In cases where a change cannot be hot-plugged, it will be registered as a
pending change (shown in red color in the GUI).
They will only be applied after rebooting the container.
对于无法热插拔的更改,它们将被登记为待处理更改(在图形界面中以红色显示)。这些更改只有在重启容器后才会生效。
11.11.1. File Format 11.11.1. 文件格式
The container configuration file uses a simple colon separated key/value
format. Each line has the following format:
容器配置文件使用简单的冒号分隔的键/值格式。每一行的格式如下:
# this is a comment
OPTION: value
Blank lines in those files are ignored, and lines starting with a # character
are treated as comments and are also ignored.
文件中的空行会被忽略,以#字符开头的行被视为注释,也会被忽略。
It is possible to add low-level, LXC style configuration directly, for example:
可以直接添加低级的 LXC 风格配置,例如:
lxc.init_cmd: /sbin/my_own_init
or 或者
lxc.init_cmd = /sbin/my_own_init
The settings are passed directly to the LXC low-level tools.
这些设置会直接传递给 LXC 低级工具。
11.11.2. Snapshots 11.11.2. 快照
When you create a snapshot, pct stores the configuration at snapshot time
into a separate snapshot section within the same configuration file. For
example, after creating a snapshot called “testsnapshot”, your configuration
file will look like this:
当你创建快照时,pct 会将快照时的配置存储到同一配置文件中的一个单独快照部分。例如,在创建名为“testsnapshot”的快照后,你的配置文件将如下所示:
Container configuration with snapshot 容器配置与快照
memory: 512
swap: 512
parent: testsnapshot
...

[testsnapshot]
memory: 512
swap: 512
snaptime: 1457170803
...
There are a few snapshot related properties like parent and snaptime. The
parent property is used to store the parent/child relationship between
snapshots. snaptime is the snapshot creation time stamp (Unix epoch).
有一些与快照相关的属性,如 parent 和 snaptime。parent 属性用于存储快照之间的父子关系。snaptime 是快照的创建时间戳(Unix 纪元时间)。
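Snapshots themselves are usually created and rolled back with pct rather than by editing the file; for example, using the snapshot name from above and an example CT ID:
# pct snapshot 100 testsnapshot
# pct rollback 100 testsnapshot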
11.11.3. Options 11.11.3. 选项
-
arch: <amd64 | arm64 | armhf | i386 | riscv32 | riscv64> (default = amd64)
arch: <amd64 | arm64 | armhf | i386 | riscv32 | riscv64>(默认 = amd64) -
OS architecture type. 操作系统架构类型。
-
cmode: <console | shell | tty> (default = tty)
cmode: <console | shell | tty>(默认 = tty) -
Console mode. By default, the console command tries to open a connection to one of the available tty devices. By setting cmode to console it tries to attach to /dev/console instead. If you set cmode to shell, it simply invokes a shell inside the container (no login).
控制台模式。默认情况下,console 命令尝试连接到可用的 tty 设备之一。将 cmode 设置为 console 时,它会尝试连接到 /dev/console。若将 cmode 设置为 shell,则会在容器内直接调用一个 Shell(无需登录)。 -
console: <boolean> (default = 1)
console: <boolean>(默认 = 1) -
Attach a console device (/dev/console) to the container.
将控制台设备(/dev/console)附加到容器。 -
cores: <integer> (1 - 8192)
cores: <整数>(1 - 8192) -
The number of cores assigned to the container. A container can use all available cores by default.
分配给容器的核心数。容器默认可以使用所有可用核心。 -
cpulimit: <number> (0 - 8192) (default = 0)
cpulimit: <数字>(0 - 8192)(默认 = 0) -
Limit of CPU usage.
CPU 使用限制。If the computer has 2 CPUs, it has a total of 2 CPU time. Value 0 indicates no CPU limit.
如果计算机有 2 个 CPU,则总共有 2 个 CPU 时间。值为 0 表示没有 CPU 限制。 -
cpuunits: <integer> (0 - 500000) (default = cgroup v1: 1024, cgroup v2: 100)
cpuunits: <整数>(0 - 500000)(默认值 = cgroup v1: 1024,cgroup v2: 100) -
CPU weight for a container. Argument is used in the kernel fair scheduler. The larger the number is, the more CPU time this container gets. Number is relative to the weights of all the other running guests.
容器的 CPU 权重。该参数用于内核公平调度器。数字越大,该容器获得的 CPU 时间越多。该数字相对于所有其他正在运行的虚拟机的权重。 -
debug: <boolean> (default = 0)
调试:<boolean>(默认值 = 0) -
Try to be more verbose. For now this only enables debug log-level on start.
尝试输出更详细的信息。目前这仅在启动时启用调试日志级别。 - description: <string> 描述:<string>
-
Description for the Container. Shown in the web-interface CT’s summary. This is saved as comment inside the configuration file.
容器的描述。在网页界面容器摘要中显示。此信息作为注释保存在配置文件中。 -
dev[n]: [[path=]<Path>] [,deny-write=<1|0>] [,gid=<integer>] [,mode=<Octal access mode>] [,uid=<integer>]
dev[n]: [[path=]<路径>] [,deny-write=<1|0>] [,gid=<整数>] [,mode=<八进制访问模式>] [,uid=<整数>] -
Device to pass through to the container
要传递给容器的设备-
deny-write=<boolean> (default = 0)
deny-write=<布尔值>(默认 = 0) -
Deny the container to write to the device
禁止容器写入该设备 -
gid=<integer> (0 - N)
gid=<整数> (0 - N) -
Group ID to be assigned to the device node
分配给设备节点的组 ID -
mode=<Octal access mode>
mode=<八进制访问模式> -
Access mode to be set on the device node
设置在设备节点上的访问模式 - path=<Path>
-
Path to the device to pass through to the container
要传递给容器的设备路径 - uid=<integer> (0 - N)
-
User ID to be assigned to the device node
分配给设备节点的用户 ID
- features: [force_rw_sys=<1|0>] [,fuse=<1|0>] [,keyctl=<1|0>] [,mknod=<1|0>] [,mount=<fstype;fstype;...>] [,nesting=<1|0>]
-
Allow containers access to advanced features.
允许容器访问高级功能。-
force_rw_sys=<boolean> (default = 0)
force_rw_sys=<boolean>(默认 = 0) -
Mount /sys in unprivileged containers as rw instead of mixed. This can break networking under newer (>= v245) systemd-network use.
在非特权容器中将 /sys 挂载为读写模式,而非混合模式。这可能会导致在较新(>= v245)systemd-network 使用下网络出现问题。 -
fuse=<boolean> (default = 0)
fuse=<boolean>(默认值 = 0) -
Allow using fuse file systems in a container. Note that interactions between fuse and the freezer cgroup can potentially cause I/O deadlocks.
允许在容器中使用 fuse 文件系统。请注意,fuse 与 freezer cgroup 之间的交互可能会导致 I/O 死锁。 -
keyctl=<boolean> (default = 0)
keyctl=<boolean>(默认值 = 0) -
For unprivileged containers only: Allow the use of the keyctl() system call. This is required to use docker inside a container. By default unprivileged containers will see this system call as non-existent. This is mostly a workaround for systemd-networkd, as it will treat it as a fatal error when some keyctl() operations are denied by the kernel due to lacking permissions. Essentially, you can choose between running systemd-networkd or docker.
仅针对非特权容器:允许使用 keyctl() 系统调用。运行容器内的 docker 需要此功能。默认情况下,非特权容器会将此系统调用视为不存在。这主要是为了解决 systemd-networkd 的问题,因为当内核因权限不足拒绝某些 keyctl() 操作时,systemd-networkd 会将其视为致命错误。基本上,你可以选择运行 systemd-networkd 或 docker。 -
mknod=<boolean> (default = 0)
mknod=<布尔值>(默认 = 0) -
Allow unprivileged containers to use mknod() to add certain device nodes. This requires a kernel with seccomp trap to user space support (5.3 or newer). This is experimental.
允许非特权容器使用 mknod() 添加某些设备节点。这需要内核支持 seccomp 陷阱到用户空间(5.3 或更高版本)。此功能处于实验阶段。 -
mount=<fstype;fstype;...>
mount=<文件系统类型;文件系统类型;...> -
Allow mounting file systems of specific types. This should be a list of file system types as used with the mount command. Note that this can have negative effects on the container’s security. With access to a loop device, mounting a file can circumvent the mknod permission of the devices cgroup, mounting an NFS file system can block the host’s I/O completely and prevent it from rebooting, etc.
允许挂载特定类型的文件系统。这里应填写与 mount 命令中使用的文件系统类型列表。请注意,这可能对容器的安全性产生负面影响。通过访问环回设备,挂载文件可以绕过设备 cgroup 的 mknod 权限;挂载 NFS 文件系统可能会完全阻塞主机的 I/O 并阻止其重启,等等。 -
nesting=<boolean> (default = 0)
nesting=<boolean>(默认值 = 0) -
Allow nesting. Best used with unprivileged containers with additional id mapping. Note that this will expose procfs and sysfs contents of the host to the guest.
允许嵌套。最好与具有额外 ID 映射的非特权容器一起使用。请注意,这将使宿主机的 procfs 和 sysfs 内容暴露给客户机。
- hookscript: <string>
-
Script that will be executed during various steps in the containers lifetime.
将在容器生命周期的各个步骤中执行的脚本。 - hostname: <string>
-
Set a host name for the container.
为容器设置主机名。 - lock: <backup | create | destroyed | disk | fstrim | migrate | mounted | rollback | snapshot | snapshot-delete>
-
Lock/unlock the container.
锁定/解锁容器。 -
memory: <integer> (16 - N) (default = 512)
memory: <整数> (16 - N) (默认 = 512) -
Amount of RAM for the container in MB.
容器的内存大小,单位为 MB。 -
mp[n]: [volume=]<volume> ,mp=<Path> [,acl=<1|0>] [,backup=<1|0>] [,mountoptions=<opt[;opt...]>] [,quota=<1|0>] [,replicate=<1|0>] [,ro=<1|0>] [,shared=<1|0>] [,size=<DiskSize>]
mp[n]: [volume=]<卷> ,mp=<路径> [,acl=<1|0>] [,backup=<1|0>] [,mountoptions=<选项[;选项...]>] [,quota=<1|0>] [,replicate=<1|0>] [,ro=<1|0>] [,shared=<1|0>] [,size=<磁盘大小>] -
Use volume as container mount point. Use the special syntax STORAGE_ID:SIZE_IN_GiB to allocate a new volume.
使用卷作为容器挂载点。使用特殊语法 STORAGE_ID:SIZE_IN_GiB 来分配新卷。- acl=<boolean>
-
Explicitly enable or disable ACL support.
显式启用或禁用 ACL 支持。 - backup=<boolean>
-
Whether to include the mount point in backups (only used for volume mount points).
是否将挂载点包含在备份中(仅用于卷挂载点)。 - mountoptions=<opt[;opt...]>
-
Extra mount options for rootfs/mps.
rootfs/mps 的额外挂载选项。 - mp=<Path>
-
Path to the mount point as seen from inside the container.
从容器内部看到的挂载点路径。Must not contain any symlinks for security reasons.
出于安全原因,不能包含任何符号链接。 - quota=<boolean>
-
Enable user quotas inside the container (not supported with zfs subvolumes)
启用容器内的用户配额(不支持 zfs 子卷) -
replicate=<boolean> (default = 1)
replicate=<boolean>(默认 = 1) -
Will include this volume to a storage replica job.
将此卷包含到存储副本任务中。 - ro=<boolean>
-
Read-only mount point 只读挂载点
-
shared=<boolean> (default = 0)
shared=<boolean>(默认 = 0) -
Mark this non-volume mount point as available on all nodes.
将此非卷挂载点标记为在所有节点上可用。This option does not share the mount point automatically, it assumes it is shared already!
此选项不会自动共享挂载点,它假设挂载点已经被共享! - size=<DiskSize>
-
Volume size (read only value).
卷大小(只读值)。 - volume=<volume>
-
Volume, device or directory to mount into the container.
要挂载到容器中的卷、设备或目录。
- nameserver: <string>
-
Sets DNS server IP address for a container. Create will automatically use the setting from the host if you neither set searchdomain nor nameserver.
为容器设置 DNS 服务器 IP 地址。如果未设置 searchdomain 和 nameserver,创建时将自动使用主机的设置。 -
net[n]: name=<string> [,bridge=<bridge>] [,firewall=<1|0>] [,gw=<GatewayIPv4>] [,gw6=<GatewayIPv6>] [,hwaddr=<XX:XX:XX:XX:XX:XX>] [,ip=<(IPv4/CIDR|dhcp|manual)>] [,ip6=<(IPv6/CIDR|auto|dhcp|manual)>] [,link_down=<1|0>] [,mtu=<integer>] [,rate=<mbps>] [,tag=<integer>] [,trunks=<vlanid[;vlanid...]>] [,type=<veth>]
net[n]:name=<字符串> [,bridge=<桥接>] [,firewall=<1|0>] [,gw=<IPv4 网关>] [,gw6=<IPv6 网关>] [,hwaddr=<XX:XX:XX:XX:XX:XX>] [,ip=<(IPv4/CIDR|dhcp|manual)>] [,ip6=<(IPv6/CIDR|auto|dhcp|manual)>] [,link_down=<1|0>] [,mtu=<整数>] [,rate=<兆比特每秒>] [,tag=<整数>] [,trunks=<vlanid[;vlanid...]>] [,type=<veth>] -
Specifies network interfaces for the container.
指定容器的网络接口。- bridge=<bridge> bridge=<桥接>
-
Bridge to attach the network device to.
将网络设备连接到的桥接。 - firewall=<boolean>
-
Controls whether this interface’s firewall rules should be used.
控制是否应使用此接口的防火墙规则。 - gw=<GatewayIPv4>
-
Default gateway for IPv4 traffic.
IPv4 流量的默认网关。 - gw6=<GatewayIPv6>
-
Default gateway for IPv6 traffic.
IPv6 流量的默认网关。 - hwaddr=<XX:XX:XX:XX:XX:XX>
-
A common MAC address with the I/G (Individual/Group) bit not set.
一个常见的 MAC 地址,I/G(单播/组播)位未设置。 - ip=<(IPv4/CIDR|dhcp|manual)>
-
IPv4 address in CIDR format.
CIDR 格式的 IPv4 地址。 - ip6=<(IPv6/CIDR|auto|dhcp|manual)>
-
IPv6 address in CIDR format.
CIDR 格式的 IPv6 地址。 - link_down=<boolean>
-
Whether this interface should be disconnected (like pulling the plug).
该接口是否应断开连接(如拔掉插头)。 - mtu=<integer> (64 - 65535)
-
Maximum transfer unit of the interface. (lxc.network.mtu)
接口的最大传输单元。(lxc.network.mtu) - name=<string>
-
Name of the network device as seen from inside the container. (lxc.network.name)
容器内部看到的网络设备名称。(lxc.network.name) - rate=<mbps>
-
Apply rate limiting to the interface
对接口应用速率限制 - tag=<integer> (1 - 4094)
-
VLAN tag for this interface.
该接口的 VLAN 标签。 - trunks=<vlanid[;vlanid...]>
-
VLAN ids to pass through the interface
通过该接口传递的 VLAN ID。 - type=<veth>
-
Network interface type. 网络接口类型。
-
onboot: <boolean> (default = 0)
onboot: <boolean> (默认 = 0) -
Specifies whether a container will be started during system bootup.
指定容器是否在系统启动时启动。 - ostype: <alpine | archlinux | centos | debian | devuan | fedora | gentoo | nixos | opensuse | ubuntu | unmanaged>
-
OS type. This is used to setup configuration inside the container, and corresponds to lxc setup scripts in /usr/share/lxc/config/<ostype>.common.conf. Value unmanaged can be used to skip any OS specific setup.
操作系统类型。用于在容器内设置配置,对应于 /usr/share/lxc/config/<ostype>.common.conf 中的 lxc 设置脚本。值 unmanaged 可用于跳过操作系统特定的设置。 -
protection: <boolean> (default = 0)
protection: <boolean> (默认 = 0) -
Sets the protection flag of the container. This will prevent removal or update of the container and its disks.
设置容器的保护标志。这将防止容器或容器的磁盘被删除或更新。 - rootfs: [volume=]<volume> [,acl=<1|0>] [,mountoptions=<opt[;opt...]>] [,quota=<1|0>] [,replicate=<1|0>] [,ro=<1|0>] [,shared=<1|0>] [,size=<DiskSize>]
-
Use volume as container root.
使用卷作为容器根目录。- acl=<boolean>
-
Explicitly enable or disable ACL support.
显式启用或禁用 ACL 支持。 - mountoptions=<opt[;opt...]>
-
Extra mount options for rootfs/mps.
rootfs/mps 的额外挂载选项。 - quota=<boolean>
-
Enable user quotas inside the container (not supported with zfs subvolumes)
启用容器内的用户配额(不支持 zfs 子卷) -
replicate=<boolean> (default = 1)
replicate=<boolean>(默认值 = 1) -
Will include this volume to a storage replica job.
将此卷包含在存储副本任务中。 - ro=<boolean>
-
Read-only mount point 只读挂载点
-
shared=<boolean> (default = 0)
shared=<boolean>(默认值 = 0) -
Mark this non-volume mount point as available on all nodes.
将此非卷挂载点标记为在所有节点上可用。This option does not share the mount point automatically, it assumes it is shared already!
此选项不会自动共享挂载点,它假设挂载点已经被共享! - size=<DiskSize> size=<磁盘大小>
-
Volume size (read only value).
卷大小(只读值)。 - volume=<volume>
-
Volume, device or directory to mount into the container.
要挂载到容器中的卷、设备或目录。
- searchdomain: <string>
-
Sets DNS search domains for a container. Create will automatically use the setting from the host if you neither set searchdomain nor nameserver.
为容器设置 DNS 搜索域。如果既未设置 searchdomain 也未设置 nameserver,创建时将自动使用主机的设置。 -
startup: `[[order=]\d+] [,up=\d+] [,down=\d+] `
启动:`[[order=]\d+] [,up=\d+] [,down=\d+] ` -
Startup and shutdown behavior. Order is a non-negative number defining the general startup order. Shutdown is done with reverse ordering. Additionally you can set the up or down delay in seconds, which specifies a delay to wait before the next VM is started or stopped.
启动和关闭行为。order 是一个非负数,定义了整体启动顺序。关闭时按相反顺序进行。此外,你可以设置 up 或 down 延迟(以秒为单位),指定在启动或关闭下一个虚拟机之前等待的时间。 -
swap: <integer> (0 - N) (default = 512)
swap: <整数>(0 - N)(默认值 = 512) -
Amount of SWAP for the container in MB.
容器的交换空间大小,单位为 MB。 - tags: <string> 标签:<string>
-
Tags of the Container. This is only meta information.
容器的标签。这只是元信息。 -
template: <boolean> (default = 0)
模板:<boolean>(默认值 = 0) -
Enable/disable Template.
启用/禁用模板。 - timezone: <string> 时区:<字符串>
-
Time zone to use in the container. If option isn’t set, then nothing will be done. Can be set to host to match the host time zone, or an arbitrary time zone option from /usr/share/zoneinfo/zone.tab
容器中使用的时区。如果未设置此选项,则不会进行任何操作。可以设置为 host 以匹配主机时区,或设置为/usr/share/zoneinfo/zone.tab 中的任意时区选项。 -
tty: <integer> (0 - 6) (default = 2)
tty:<整数>(0 - 6)(默认 = 2) -
Specify the number of tty available to the container
指定容器可用的 tty 数量 -
unprivileged: <boolean> (default = 0)
unprivileged: <boolean>(默认 = 0) -
Makes the container run as unprivileged user. (Should not be modified manually.)
使容器以非特权用户身份运行。(不应手动修改。) - unused[n]: [volume=]<volume>
-
Reference to unused volumes. This is used internally, and should not be modified manually.
引用未使用的卷。这在内部使用,不应手动修改。- volume=<volume>
-
The volume that is not used currently.
当前未使用的卷。
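Most of the options above can also be changed from the command line with pct set. The following sketch assumes a hypothetical container with ID 100 and a storage named local-lvm; adjust both to your setup:
# enable nesting and keyctl, e.g. for running docker inside an unprivileged container
pct set 100 --features nesting=1,keyctl=1
# add a new 8 GiB mount point volume, included in backups
pct set 100 --mp0 local-lvm:8,mp=/mnt/data,backup=1
# attach a network interface with DHCP and the firewall enabled
pct set 100 --net0 name=eth0,bridge=vmbr0,ip=dhcp,firewall=1
# adjust memory and swap (values in MB)
pct set 100 --memory 1024 --swap 512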
11.12. Locks 11.12. 锁定
Container migrations, snapshots and backups (vzdump) set a lock to prevent
incompatible concurrent actions on the affected container. Sometimes you need
to remove such a lock manually (e.g., after a power failure).
容器迁移、快照和备份(vzdump)会设置锁,以防止对受影响容器进行不兼容的并发操作。有时你需要手动移除这样的锁(例如,断电后)。
# pct unlock <CTID>
|
|
Only do this if you are sure the action which set the lock is no
longer running. 只有在确定设置锁的操作已经不再运行时,才执行此操作。 |
12. Software-Defined Network
12. 软件定义网络
The Software-Defined Network (SDN) feature in Proxmox VE enables the
creation of virtual zones and networks (VNets). This functionality simplifies
advanced networking configurations and multitenancy setup.
Proxmox VE 中的软件定义网络(SDN)功能支持创建虚拟区域和网络(VNet)。此功能简化了高级网络配置和多租户设置。
12.1. Introduction 12.1. 介绍
The Proxmox VE SDN allows for separation and fine-grained control of virtual guest
networks, using flexible, software-controlled configurations.
Proxmox VE SDN 允许通过灵活的软件控制配置,实现虚拟客户机网络的分离和细粒度控制。
Separation is managed through zones, virtual networks (VNets), and
subnets. A zone is its own virtually separated network area. A VNet is a
virtual network that belongs to a zone. A subnet is an IP range inside a VNet.
分离通过区域、虚拟网络(VNet)和子网来管理。区域是其自身虚拟分离的网络区域。VNet 是属于某个区域的虚拟网络。子网是 VNet 内的一个 IP 范围。
Depending on the type of the zone, the network behaves differently and offers
specific features, advantages, and limitations.
根据区域的类型,网络表现不同,并提供特定的功能、优势和限制。
Use cases for SDN range from an isolated private network on each individual node
to complex overlay networks across multiple PVE clusters on different locations.
SDN 的使用场景范围从每个单独节点上的隔离私有网络,到跨多个不同地点的 PVE 集群的复杂覆盖网络。
After configuring a VNet in the cluster-wide datacenter SDN administration
interface, it is available as a common Linux bridge, locally on each node, to be
assigned to VMs and Containers.
在集群范围的数据中心 SDN 管理界面配置 VNet 后,它作为一个通用的 Linux 桥接,在每个节点本地可用,可分配给虚拟机和容器。
12.2. Support Status 12.2. 支持状态
12.2.1. History 12.2.1. 历史
The Proxmox VE SDN stack has been available as an experimental feature since 2019 and
has been continuously improved and tested by many developers and users.
With its integration into the web interface in Proxmox VE 6.2, a significant
milestone towards broader integration was achieved.
During the Proxmox VE 7 release cycle, numerous improvements and features were added.
Based on user feedback, it became apparent that the fundamental design choices
and their implementation were quite sound and stable. Consequently, labeling it
as ‘experimental’ did not do justice to the state of the SDN stack.
For Proxmox VE 8, a decision was made to lay the groundwork for full integration of
the SDN feature by elevating the management of networks and interfaces to a core
component in the Proxmox VE access control stack.
In Proxmox VE 8.1, two major milestones were achieved: firstly, DHCP integration was
added to the IP address management (IPAM) feature, and secondly, the SDN
integration is now installed by default.
Proxmox VE SDN 堆栈自 2019 年以来作为实验性功能提供,并且经过了许多开发者和用户的持续改进和测试。随着其在 Proxmox VE 6.2 中集成到网页界面,实现了向更广泛集成迈出的重要里程碑。在 Proxmox VE 7 发布周期中,添加了大量改进和功能。基于用户反馈,显而易见其基本设计选择及其实现相当合理且稳定。因此,将其标记为“实验性”并不能公正反映 SDN 堆栈的状态。对于 Proxmox VE 8,决定通过将网络和接口的管理提升为 Proxmox VE 访问控制堆栈中的核心组件,为 SDN 功能的全面集成奠定基础。在 Proxmox VE 8.1 中,实现了两个重大里程碑:首先,DHCP 集成被添加到 IP 地址管理(IPAM)功能中;其次,SDN 集成现已默认安装。
12.2.2. Current Status 12.2.2. 当前状态
The current support status for the various layers of our SDN installation is as
follows:
我们 SDN 安装各层的当前支持状态如下:
-
Core SDN, which includes VNet management and its integration with the Proxmox VE stack, is fully supported.
核心 SDN,包括 VNet 管理及其与 Proxmox VE 堆栈的集成,已完全支持。 -
IPAM, including DHCP management for virtual guests, is in tech preview.
IPAM,包括虚拟客户机的 DHCP 管理,处于技术预览阶段。 -
Complex routing via FRRouting and controller integration are in tech preview.
通过 FRRouting 的复杂路由和控制器集成处于技术预览阶段。
12.3. Installation 12.3. 安装
12.3.1. SDN Core 12.3.1. SDN 核心
Since Proxmox VE 8.1 the core Software-Defined Network (SDN) packages are installed
by default.
从 Proxmox VE 8.1 开始,核心软件定义网络(SDN)包默认已安装。
If you upgrade from an older version, you need to install the
libpve-network-perl package on every node:
如果您从旧版本升级,需要在每个节点上安装 libpve-network-perl 包:
apt update
apt install libpve-network-perl
|
|
Proxmox VE version 7.0 and above have the ifupdown2 package installed by
default. If you originally installed your system with an older version, you need
to explicitly install the ifupdown2 package. Proxmox VE 7.0 及以上版本默认安装了 ifupdown2 包。如果您最初使用较旧版本安装系统,则需要显式安装 ifupdown2 包。 |
After installation, you need to ensure that the following line is present at the
end of the /etc/network/interfaces configuration file on all nodes, so that
the SDN configuration gets included and activated.
安装完成后,您需要确保在所有节点的 /etc/network/interfaces 配置文件末尾存在以下行,以便包含并激活 SDN 配置。
source /etc/network/interfaces.d/*
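If you are unsure whether the line is already present, a small sketch like the following checks for it and appends it if missing (review /etc/network/interfaces manually before relying on this one-liner):
grep -q '^source /etc/network/interfaces.d/' /etc/network/interfaces || \
    echo "source /etc/network/interfaces.d/*" >> /etc/network/interfaces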
12.3.2. DHCP IPAM
The DHCP integration into the built-in PVE IP Address Management stack
currently uses dnsmasq for giving out DHCP leases. This is currently opt-in.
内置 PVE IP 地址管理堆栈中的 DHCP 集成目前使用 dnsmasq 来分配 DHCP 租约。此功能目前为可选启用。
To use that feature you need to install the dnsmasq package on every node:
要使用该功能,您需要在每个节点上安装 dnsmasq 包:
apt update
apt install dnsmasq
# disable default instance
systemctl disable --now dnsmasq
12.3.3. FRRouting
The Proxmox VE SDN stack uses the FRRouting project for
advanced setups. This is currently opt-in.
Proxmox VE SDN 堆栈使用 FRRouting 项目来实现高级配置。目前这是可选的。
To use the SDN routing integration you need to install the frr-pythontools
package on all nodes:
要使用 SDN 路由集成,您需要在所有节点上安装 frr-pythontools 包:
apt update
apt install frr-pythontools
Then enable the frr service on all nodes:
然后在所有节点上启用 frr 服务:
systemctl enable frr.service
12.4. Configuration Overview
12.4. 配置概述
Configuration is done at the web UI at datacenter level, separated into the
following sections:
配置在数据中心级别的网页用户界面中进行,分为以下几个部分:
-
SDN: Here you get an overview of the current active SDN state, and you can apply all pending changes to the whole cluster.
SDN:在这里您可以查看当前活动的 SDN 状态概览,并且可以将所有待处理的更改应用到整个集群。
Zones: Create and manage the virtually separated network zones
区域:创建和管理虚拟分隔的网络区域 -
VNets: Create virtual network bridges and manage subnets
VNets:创建虚拟网络桥接并管理子网
The Options category allows adding and managing additional services to be used
in your SDN setup.
“选项”类别允许添加和管理在您的 SDN 设置中使用的附加服务。
-
Controllers: For controlling layer 3 routing in complex setups
控制器:用于在复杂设置中控制三层路由 -
DHCP: Define a DHCP server for a zone that automatically allocates IPs for guests in the IPAM and leases them to the guests via DHCP.
DHCP:为某个区域定义 DHCP 服务器,自动为 IPAM 中的客户机分配 IP 地址,并通过 DHCP 将其租赁给客户机。 -
IPAM: Enables external IP address management for guests
IPAM:启用外部 IP 地址管理以管理客户机的 IP 地址 -
DNS: Define a DNS server integration for registering virtual guests' hostname and IP addresses
DNS:定义 DNS 服务器集成,用于注册虚拟客户机的主机名和 IP 地址
12.5. Technology & Configuration
12.5. 技术与配置
The Proxmox VE Software-Defined Network implementation uses standard Linux networking
as much as possible. The reason for this is that modern Linux networking
provides almost everything needed for a fully featured SDN implementation, avoids
adding external dependencies, and reduces the overall number of components that
can break.
Proxmox VE 软件定义网络的实现尽可能使用标准的 Linux 网络。这样做的原因是现代 Linux 网络几乎满足了功能完善的 SDN 实现的所有需求,避免了添加外部依赖,并减少了可能出错的组件总数。
The Proxmox VE SDN configurations are located in /etc/pve/sdn, which is shared with
all other cluster nodes through the Proxmox VE configuration file system.
Those configurations get translated to the respective configuration formats of
the tools that manage the underlying network stack (for example ifupdown2 or
frr).
Proxmox VE SDN 的配置位于 /etc/pve/sdn,该目录通过 Proxmox VE 配置文件系统与所有其他集群节点共享。这些配置会被转换为管理底层网络栈的工具(例如 ifupdown2 或 frr)所使用的相应配置格式。
New changes are not immediately applied but recorded as pending first. You can
then apply a set of different changes all at once in the main SDN overview
panel on the web interface. This system allows rolling out various changes as a
single atomic one.
新的更改不会立即应用,而是首先记录为待处理状态。然后,您可以在网页界面的主 SDN 概览面板中一次性应用一组不同的更改。该系统允许将各种更改作为单个原子操作进行部署。
The SDN tracks the rolled-out state through the .running-config and .version
files located in /etc/pve/sdn.
SDN 通过位于 /etc/pve/sdn 的 .running-config 和 .version 文件跟踪已部署的状态。
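The apply step can usually also be triggered from the command line via the cluster-wide SDN API entry point used by the GUI's Apply button. Assuming the standard endpoint, a sketch would be:
pvesh set /cluster/sdn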
12.6. Zones 12.6. 区域
A zone defines a virtually separated network. Zones are restricted to
specific nodes and assigned permissions, in order to restrict users to a certain
zone and its contained VNets.
一个区域定义了一个虚拟隔离的网络。区域被限制在特定节点和分配的权限内,以限制用户只能访问某个区域及其包含的虚拟网络(VNet)。
Different technologies can be used for separation:
可以使用不同的技术进行隔离:
-
Simple: Isolated Bridge. A simple layer 3 routing bridge (NAT)
简单:隔离桥。一个简单的三层路由桥(NAT) -
VLAN: Virtual LANs are the classic method of subdividing a LAN
VLAN:虚拟局域网是细分局域网的经典方法 -
QinQ: Stacked VLAN (formally known as IEEE 802.1ad)
QinQ:叠加 VLAN(正式名称为 IEEE 802.1ad) -
VXLAN: Layer 2 VXLAN network via a UDP tunnel
VXLAN:通过 UDP 隧道实现的二层 VXLAN 网络 -
EVPN (BGP EVPN): VXLAN with BGP to establish Layer 3 routing
EVPN(BGP EVPN):使用 BGP 建立三层路由的 VXLAN
12.6.1. Common Options 12.6.1. 常用选项
The following options are available for all zone types:
以下选项适用于所有区域类型:
- Nodes 节点
-
The nodes which the zone and associated VNets should be deployed on.
区域及其关联的虚拟网络应部署的节点。 - IPAM
-
Use an IP Address Management (IPAM) tool to manage IPs in the zone. Optional, defaults to pve.
使用 IP 地址管理(IPAM)工具来管理该区域内的 IP。可选,默认为 pve。 - DNS
-
DNS API server. Optional.
DNS API 服务器。可选。 - ReverseDNS 反向 DNS
-
Reverse DNS API server. Optional.
反向 DNS API 服务器。可选。 - DNSZone DNS 区域
-
DNS domain name. Used to register hostnames, such as <hostname>.<domain>. The DNS zone must already exist on the DNS server. Optional.
DNS 域名。用于注册主机名,例如<hostname>.<domain>。DNS 区域必须已存在于 DNS 服务器上。可选。
12.6.2. Simple Zones 12.6.2. 简单区域
This is the simplest plugin. It will create an isolated VNet bridge. This
bridge is not linked to a physical interface, and VM traffic is only local to
each node.
It can be used in NAT or routed setups.
这是最简单的插件。它将创建一个隔离的虚拟网络桥接。该桥接不连接到物理接口,虚拟机流量仅在每个节点本地。它可以用于 NAT 或路由设置。
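As a rough sketch of how such a zone can also be created from the command line (the GUI is the primary way; the parameter names follow the API fields and the zone name simple1 is just an example):
pvesh create /cluster/sdn/zones --type simple --zone simple1 --ipam pve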
12.6.3. VLAN Zones 12.6.3. VLAN 区域
The VLAN plugin uses an existing local Linux or OVS bridge to connect to the
node’s physical interface. It uses VLAN tagging defined in the VNet to isolate
the network segments. This allows connectivity of VMs between different nodes.
VLAN 插件使用现有的本地 Linux 或 OVS 桥接连接到节点的物理接口。它使用 VNet 中定义的 VLAN 标记来隔离网络段。这允许不同节点之间的虚拟机互联。
VLAN zone configuration options:
VLAN 区域配置选项:
- Bridge 桥接
-
The local bridge or OVS switch, already configured on each node that allows node-to-node connection.
本地桥接或 OVS 交换机,已在每个节点上配置,允许节点间连接。
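For reference, a VLAN zone entry in /etc/pve/sdn/zones.cfg could look roughly like the following (an illustrative sketch; myvlanzone and vmbr0 are example names):
vlan: myvlanzone
        bridge vmbr0
        ipam pve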
12.6.4. QinQ Zones 12.6.4. QinQ 区域
QinQ, also known as VLAN stacking, uses multiple layers of VLAN tags for
isolation. The QinQ zone defines the outer VLAN tag (the Service VLAN)
whereas the inner VLAN tag is defined by the VNet.
QinQ 也称为 VLAN 叠加,使用多层 VLAN 标签进行隔离。QinQ 区域定义外层 VLAN 标签(服务 VLAN),而内层 VLAN 标签由虚拟网络(VNet)定义。
|
|
Your physical network switches must support stacked VLANs for this
configuration. 您的物理网络交换机必须支持堆叠 VLAN 才能进行此配置。 |
QinQ zone configuration options:
QinQ 区域配置选项:
- Bridge 桥接
-
A local, VLAN-aware bridge that is already configured on each local node
每个本地节点上已配置的本地、支持 VLAN 的桥接器 - Service VLAN 服务 VLAN
-
The main VLAN tag of this zone
该区域的主要 VLAN 标签 - Service VLAN Protocol 服务 VLAN 协议
-
Allows you to choose between an 802.1q (default) or 802.1ad service VLAN type.
允许您在 802.1q(默认)或 802.1ad 服务 VLAN 类型之间进行选择。 - MTU
-
Due to the double stacking of tags, you need 4 more bytes for QinQ VLANs. For example, you must reduce the MTU to 1496 if your physical interface MTU is 1500.
由于标签的双重叠加,QinQ VLAN 需要额外的 4 个字节。例如,如果你的物理接口 MTU 是 1500,则必须将 MTU 降低到 1496。
12.6.5. VXLAN Zones 12.6.5. VXLAN 区域
The VXLAN plugin establishes a tunnel (overlay) on top of an existing network
(underlay). This encapsulates layer 2 Ethernet frames within layer 4 UDP
datagrams using the default destination port 4789.
VXLAN 插件在现有网络(底层网络)之上建立一个隧道(覆盖网络)。它使用默认目标端口 4789,将二层以太网帧封装在四层 UDP 数据报中。
You have to configure the underlay network yourself to enable UDP connectivity
between all peers.
你必须自行配置底层网络,以实现所有节点之间的 UDP 连接。
You can, for example, create a VXLAN overlay network on top of public internet,
appearing to the VMs as if they share the same local Layer 2 network.
例如,你可以在公共互联网之上创建一个 VXLAN 覆盖网络,使虚拟机看起来像是共享同一个本地二层网络。
|
|
VXLAN on its own does not provide any encryption. When joining
multiple sites via VXLAN, make sure to establish a secure connection between
the sites, for example by using a site-to-site VPN. 单独使用 VXLAN 并不提供任何加密。在通过 VXLAN 连接多个站点时,确保在站点之间建立安全连接,例如使用站点到站点 VPN。 |
VXLAN zone configuration options:
VXLAN 区域配置选项:
- Peers Address List 对等地址列表
-
A list of IP addresses of each node in the VXLAN zone. These can also be external nodes that are reachable at the given IP address. All nodes in the cluster need to be mentioned here.
VXLAN 区域中每个节点的 IP 地址列表。这里可以是通过该 IP 地址可访问的外部节点。集群中的所有节点都需要在此处列出。 - MTU
-
Because VXLAN encapsulation uses 50 bytes, the MTU needs to be 50 bytes lower than the outgoing physical interface.
由于 VXLAN 封装使用了 50 字节,MTU 需要比外发物理接口低 50 字节。
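For example, with a physical interface MTU of 1500, the VNet MTU should be at most 1500 - 50 = 1450. A VXLAN zone entry in /etc/pve/sdn/zones.cfg could then look roughly like this (an illustrative sketch; the peer addresses are placeholders):
vxlan: myvxlanzone
        peers 192.168.0.1,192.168.0.2,192.168.0.3
        mtu 1450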
12.6.6. EVPN Zones 12.6.6. EVPN 区域
The EVPN zone creates a routable Layer 3 network, capable of spanning across
multiple clusters. This is achieved by establishing a VPN and utilizing BGP as
the routing protocol.
EVPN 区域创建了一个可路由的三层网络,能够跨多个集群扩展。这是通过建立 VPN 并使用 BGP 作为路由协议实现的。
The VNet of EVPN can have an anycast IP address and/or MAC address. The bridge
IP is the same on each node, meaning a virtual guest can use this address as
gateway.
EVPN 的虚拟网络(VNet)可以拥有任播 IP 地址和/或 MAC 地址。桥接 IP 在每个节点上都是相同的,这意味着虚拟客户机可以使用该地址作为网关。
Routing can work across VNets from different zones through a VRF (Virtual
Routing and Forwarding) interface.
路由可以通过 VRF(虚拟路由与转发)接口在不同区域的虚拟网络之间工作。
EVPN zone configuration options:
EVPN 区域配置选项:
- VRF VXLAN ID
-
A VXLAN-ID used for dedicated routing interconnect between VNets. It must be different than the VXLAN-ID of the VNets.
用于 VNets 之间专用路由互联的 VXLAN-ID。它必须与 VNets 的 VXLAN-ID 不同。 - Controller 控制器
-
The EVPN-controller to use for this zone. (See controller plugins section).
此区域使用的 EVPN 控制器。(参见控制器插件部分)。 - VNet MAC Address VNet MAC 地址
-
Anycast MAC address that gets assigned to all VNets in this zone. Will be auto-generated if not defined.
分配给该区域内所有 VNet 的 Anycast MAC 地址。如果未定义,将自动生成。 - Exit Nodes 出口节点
-
Nodes that shall be configured as exit gateways from the EVPN network, through the real network. The configured nodes will announce a default route in the EVPN network. Optional.
应配置为通过真实网络作为 EVPN 网络出口网关的节点。配置的节点将在 EVPN 网络中宣布默认路由。可选。 - Primary Exit Node 主出口节点
-
If you use multiple exit nodes, force traffic through this primary exit node, instead of load-balancing on all nodes. Optional but necessary if you want to use SNAT or if your upstream router doesn’t support ECMP.
如果您使用多个出口节点,请强制流量通过此主出口节点,而不是在所有节点上进行负载均衡。如果您想使用 SNAT 或上游路由器不支持 ECMP,则此项为可选但必要。 -
Exit Nodes Local Routing
出口节点本地路由 -
This is a special option if you need to reach a VM/CT service from an exit node. (By default, the exit nodes only allow forwarding traffic between real network and EVPN network). Optional.
这是一个特殊选项,如果您需要从出口节点访问 VM/CT 服务。(默认情况下,出口节点仅允许在真实网络和 EVPN 网络之间转发流量)。可选。 - Advertise Subnets 发布子网
-
Announce the full subnet in the EVPN network. This is useful if you have silent VMs/CTs (for example, if you have multiple IPs and the anycast gateway doesn’t see traffic from these IPs, the IP addresses won’t be reachable inside the EVPN network). Optional.
在 EVPN 网络中宣布完整子网。如果您有静默的虚拟机/容器(例如,如果您有多个 IP 且任播网关未看到来自这些 IP 的流量,则这些 IP 地址将无法在 EVPN 网络内被访问)。可选。 -
Disable ARP ND Suppression
禁用 ARP ND 抑制 -
Don’t suppress ARP or ND (Neighbor Discovery) packets. This is required if you use floating IPs in your VMs (IP and MAC addresses are being moved between systems). Optional.
不抑制 ARP 或 ND(邻居发现)数据包。如果您在虚拟机中使用浮动 IP(IP 和 MAC 地址在系统间移动),则需要此设置。可选。 - Route-target Import 路由目标导入
-
Allows you to import a list of external EVPN route targets. Used for cross-DC or different EVPN network interconnects. Optional.
允许您导入一组外部 EVPN 路由目标。用于跨数据中心或不同 EVPN 网络的互联。可选。 - MTU
-
Because VXLAN encapsulation uses 50 bytes, the MTU needs to be 50 bytes less than the maximal MTU of the outgoing physical interface. Optional, defaults to 1450.
由于 VXLAN 封装使用 50 字节,MTU 需要比出接口物理接口的最大 MTU 小 50 字节。可选,默认值为 1450。
12.7. VNets 12.7. 虚拟网络(VNet)
After creating a virtual network (VNet) through the SDN GUI, a local network
interface with the same name is available on each node. To connect a guest to the
VNet, assign the interface to the guest and set the IP address accordingly.
通过 SDN 图形界面创建虚拟网络(VNet)后,每个节点上都会出现一个同名的本地网络接口。要将虚拟机连接到 VNet,需要将该接口分配给虚拟机并相应设置 IP 地址。
Depending on the zone, these options have different meanings and are explained
in the respective zone section in this document.
根据不同的区域,这些选项的含义有所不同,具体说明请参见本文档中相应的区域章节。
|
|
In the current state, some options may have no effect or won’t work in
certain zones. 在当前状态下,某些选项可能无效或在某些区域无法使用。 |
VNet configuration options:
VNet 配置选项:
- ID
-
An up to 8 character ID to identify a VNet
一个最多 8 个字符的 ID,用于识别 VNet - Comment 备注
-
More descriptive identifier. Assigned as an alias on the interface. Optional
更具描述性的标识符。作为接口的别名分配。可选 - Zone 区域
-
The associated zone for this VNet
此虚拟网络关联的区域 - Tag 标签
-
The unique VLAN or VXLAN ID
唯一的 VLAN 或 VXLAN ID - VLAN Aware 支持 VLAN
-
Enables vlan-aware option on the interface, enabling configuration in the guest.
在接口上启用 vlan-aware 选项,允许在客户机中进行配置。 - Isolate Ports 隔离端口
-
Sets the isolated flag for all guest ports of this interface, but not for the interface itself. This means guests can only send traffic to non-isolated bridge-ports, which is the bridge itself. In order for this setting to take effect, you need to restart the affected guest.
为该接口的所有来宾端口设置隔离标志,但不包括接口本身。这意味着来宾只能向非隔离的桥接端口发送流量,即桥接本身。为了使此设置生效,您需要重启受影响的来宾。
|
|
Port isolation is local to each host. Use the
VNET Firewall to further isolate traffic in
the VNET across nodes. For example, DROP by default and only allow traffic from
the IP subnet to the gateway and vice versa. 端口隔离是每个主机本地的。使用 VNET 防火墙进一步隔离跨节点的 VNET 流量。例如,默认 DROP,只允许来自 IP 子网到网关及其反向的流量。 |
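A VNet can also be created via the API/CLI; a minimal sketch (the names and the tag are examples, and the parameter names follow the API fields) could be:
pvesh create /cluster/sdn/vnets --vnet myvnet1 --zone myvlanzone --tag 10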
12.8. Subnets 12.8. 子网
A subnet defines a specific IP range, described by the CIDR network address.
Each VNet can have one or more subnets.
子网定义了一个特定的 IP 范围,由 CIDR 网络地址描述。每个 VNet 可以有一个或多个子网。
A subnet can be used to:
子网可以用于:
-
Restrict the IP addresses you can define on a specific VNet
限制您可以在特定虚拟网络(VNet)上定义的 IP 地址 -
Assign routes/gateways on a VNet in layer 3 zones
在第 3 层区域的虚拟网络(VNet)上分配路由/网关 -
Enable SNAT on a VNet in layer 3 zones
在第 3 层区域的虚拟网络(VNet)上启用源网络地址转换(SNAT) -
Auto assign IPs on virtual guests (VM or CT) through IPAM plugins
通过 IPAM 插件自动为虚拟客户机(VM 或 CT)分配 IP 地址 -
DNS registration through DNS plugins
通过 DNS 插件进行 DNS 注册
If an IPAM server is associated with the subnet zone, the subnet prefix will be
automatically registered in the IPAM.
如果子网区域关联了 IPAM 服务器,子网前缀将自动注册到 IPAM 中。
Subnet configuration options:
子网配置选项:
- ID
-
A CIDR network address, for example 10.0.0.0/8
CIDR 网络地址,例如 10.0.0.0/8 - Gateway 网关
-
The IP address of the network’s default gateway. On layer 3 zones (Simple/EVPN plugins), it will be deployed on the VNet.
网络默认网关的 IP 地址。在第 3 层区域(Simple/EVPN 插件)中,它将部署在 VNet 上。 - SNAT
-
Enable Source NAT which allows VMs from inside a VNet to connect to the outside network by forwarding the packets to the nodes outgoing interface. On EVPN zones, forwarding is done on EVPN gateway-nodes. Optional.
启用源地址转换(Source NAT),允许虚拟机从虚拟网络(VNet)内部连接到外部网络,通过将数据包转发到节点的外发接口。在 EVPN 区域,转发在 EVPN 网关节点上进行。可选。 - DNS Zone Prefix DNS 区域前缀
-
Add a prefix to the domain registration, like <hostname>.prefix.<domain> Optional.
为域名注册添加前缀,例如 <hostname>.prefix.<domain>。可选。
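Subnets can likewise be created from the command line; a rough sketch follows (the type parameter and the exact field names mirror the API and may need adjusting for your version):
pvesh create /cluster/sdn/vnets/myvnet1/subnets --type subnet \
    --subnet 10.0.1.0/24 --gateway 10.0.1.1 --snat 1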
12.9. Controllers 12.9. 控制器
Some zones implement a separated control and data plane that require an external
controller to manage the VNet’s control plane.
某些区域实现了分离的控制平面和数据平面,需要外部控制器来管理虚拟网络(VNet)的控制平面。
Currently, only the EVPN zone requires an external controller.
目前,只有 EVPN 区域需要外部控制器。
12.9.1. EVPN Controller 12.9.1. EVPN 控制器
The EVPN zone requires an external controller to manage the control plane.
The EVPN controller plugin configures the Free Range Routing (frr) router.
EVPN 区域需要一个外部控制器来管理控制平面。EVPN 控制器插件配置 Free Range Routing (frr) 路由器。
To enable the EVPN controller, you need to enable FRR on every node, see
install FRRouting.
要启用 EVPN 控制器,您需要在每个节点上启用 FRR,参见安装 FRRouting。
EVPN controller configuration options:
EVPN 控制器配置选项:
- ASN #
-
A unique BGP ASN number. It’s highly recommended to use a private ASN number (64512 – 65534, 4200000000 – 4294967294), as otherwise you could end up breaking global routing by mistake.
一个唯一的 BGP ASN 号码。强烈建议使用私有 ASN 号码(64512 – 65534,4200000000 – 4294967294),否则可能会意外破坏全球路由。 - Peers 对等体
-
An IP list of all nodes that are part of the EVPN zone. (could also be external nodes or route reflector servers)
属于 EVPN 区域的所有节点的 IP 列表。(也可以是外部节点或路由反射服务器)
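A sketch of creating such a controller from the command line (the ASN and peer addresses are the example values used in the EVPN setup example later in this chapter; parameter names follow the API fields):
pvesh create /cluster/sdn/controllers --type evpn --controller myevpnctl \
    --asn 65000 --peers 192.168.0.1,192.168.0.2,192.168.0.3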
12.9.2. BGP Controller 12.9.2. BGP 控制器
The BGP controller is not used directly by a zone.
You can use it to configure FRR to manage BGP peers.
BGP 控制器不会被区域直接使用。您可以使用它来配置 FRR 以管理 BGP 对等体。
For BGP-EVPN, it can be used to define a different ASN by node, so doing EBGP.
It can also be used to export EVPN routes to an external BGP peer.
对于 BGP-EVPN,它可以用来按节点定义不同的 ASN,从而实现 EBGP。它还可以用来将 EVPN 路由导出到外部 BGP 对等体。
|
|
By default, for a simple full mesh EVPN, you don’t need to define a BGP
controller. 默认情况下,对于简单的全网状 EVPN,您无需定义 BGP 控制器。 |
BGP controller configuration options:
BGP 控制器配置选项:
- Node 节点
-
The node of this BGP controller
此 BGP 控制器的节点 - ASN # ASN 编号
-
A unique BGP ASN number. It’s highly recommended to use a private ASN number in the range (64512 - 65534) or (4200000000 - 4294967294), as otherwise you could break global routing by mistake.
唯一的 BGP ASN 编号。强烈建议使用私有 ASN 编号,范围为(64512 - 65534)或(4200000000 - 4294967294),否则可能会意外破坏全球路由。 - Peer 对等体
-
A list of peer IP addresses you want to communicate with using the underlying BGP network.
您希望通过底层 BGP 网络进行通信的对等体 IP 地址列表。 - EBGP
-
If your peer’s remote-AS is different, this enables EBGP.
如果您的对等体的远程 AS 不同,则启用 EBGP。 - Loopback Interface 回环接口
-
Use a loopback or dummy interface as the source of the EVPN network (for multipath).
使用回环接口或虚拟接口作为 EVPN 网络的源(用于多路径)。 - ebgp-mutltihop ebgp-multihop
-
Increase the number of hops to reach peers, in case they are not directly connected or they use loopback.
增加到达对等体的跳数,以防它们未直接连接或使用回环接口。 - bgp-multipath-as-path-relax
-
Allow ECMP if your peers have different ASN.
如果对等体具有不同的 ASN,则允许 ECMP。
12.9.3. ISIS Controller 12.9.3. ISIS 控制器
The ISIS controller is not used directly by a zone.
You can use it to configure FRR to export EVPN routes to an ISIS domain.
ISIS 控制器不会被区域直接使用。您可以使用它来配置 FRR,将 EVPN 路由导出到 ISIS 域。
ISIS controller configuration options:
ISIS 控制器配置选项:
- Node 节点
-
The node of this ISIS controller.
此 ISIS 控制器的节点。 - Domain 域
-
A unique ISIS domain.
唯一的 ISIS 域。 - Network Entity Title 网络实体标题
-
A Unique ISIS network address that identifies this node.
唯一的 ISIS 网络地址,用于标识此节点。 - Interfaces 接口
-
A list of physical interface(s) used by ISIS.
ISIS 使用的物理接口列表。 - Loopback 回环接口
-
Use a loopback or dummy interface as the source of the EVPN network (for multipath).
使用回环接口或虚拟接口作为 EVPN 网络的源(用于多路径)。
12.10. IPAM
IP Address Management (IPAM) tools manage the IP addresses of clients on the
network. SDN in Proxmox VE uses IPAM for example to find free IP addresses for new
guests.
IP 地址管理(IPAM)工具管理网络中客户端的 IP 地址。Proxmox VE 中的 SDN 例如使用 IPAM 来查找新客户机的空闲 IP 地址。
A single IPAM instance can be associated with one or more zones.
单个 IPAM 实例可以关联一个或多个区域。
12.10.1. PVE IPAM Plugin
12.10.1. PVE IPAM 插件
The default built-in IPAM for your Proxmox VE cluster.
Proxmox VE 集群的默认内置 IPAM。
You can inspect the current status of the PVE IPAM Plugin via the IPAM panel in
the SDN section of the datacenter configuration. This UI can be used to create,
update and delete IP mappings. This is particularly convenient in conjunction
with the DHCP feature.
您可以通过数据中心配置的 SDN 部分中的 IPAM 面板检查当前 PVE IPAM 插件的状态。该界面可用于创建、更新和删除 IP 映射。结合 DHCP 功能使用时尤其方便。
If you are using DHCP, you can use the IPAM panel to create or edit leases for
specific VMs, which enables you to change the IPs allocated via DHCP. When
editing an IP of a VM that is using DHCP, you must make sure to force the guest
to acquire a new DHCP lease. This can usually be done by reloading the network
stack of the guest or rebooting it.
如果您使用 DHCP,可以通过 IPAM 面板为特定虚拟机创建或编辑租约,从而更改通过 DHCP 分配的 IP 地址。在编辑使用 DHCP 的虚拟机的 IP 时,必须确保强制客户机获取新的 DHCP 租约。通常可以通过重新加载客户机的网络堆栈或重启客户机来实现。
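For example, inside a Debian-based guest one of the following usually forces a new lease; the exact method depends on how the guest manages its network, and eth0 is a placeholder interface name:
# ISC dhclient: release the old lease and request a new one
dhclient -r eth0 && dhclient eth0
# or, on guests using systemd-networkd
systemctl restart systemd-networkd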
12.10.2. NetBox IPAM Plugin
12.10.2. NetBox IPAM 插件
NetBox is an open-source IP
Address Management (IPAM) and datacenter infrastructure management (DCIM) tool.
NetBox 是一个开源的 IP 地址管理(IPAM)和数据中心基础设施管理(DCIM)工具。
To integrate NetBox with Proxmox VE SDN, create an API token in NetBox as described
here: https://docs.netbox.dev/en/stable/integrations/rest-api/#tokens
要将 NetBox 与 Proxmox VE SDN 集成,请按照此处描述的方法在 NetBox 中创建一个 API 代币:https://docs.netbox.dev/en/stable/integrations/rest-api/#tokens
The NetBox configuration properties are:
NetBox 的配置属性为:
- URL
-
The NetBox REST API endpoint: http://yournetbox.domain.com/api
NetBox REST API 端点:http://yournetbox.domain.com/api - Token 代币
-
An API access token
一个 API 访问代币
12.10.3. phpIPAM Plugin 12.10.3. phpIPAM 插件
In phpIPAM you need to create an "application" and add
an API token with admin privileges to the application.
在 phpIPAM 中,您需要创建一个“应用程序”,并为该应用程序添加具有管理员权限的 API 代币。
The phpIPAM configuration properties are:
phpIPAM 配置属性为:
- URL
-
The REST-API endpoint: http://phpipam.domain.com/api/<appname>/
REST-API 端点:http://phpipam.domain.com/api/<appname>/ - Token
-
An API access token
一个 API 访问代币 - Section 章节
-
An integer ID. Sections are a group of subnets in phpIPAM. Default installations use sectionid=1 for customers.
一个整数 ID。章节是在 phpIPAM 中一组子网。默认安装使用 sectionid=1 表示客户。
12.11. DNS
The DNS plugin in Proxmox VE SDN is used to define a DNS API server for registration
of your hostname and IP address. A DNS configuration is associated with one or
more zones, to provide DNS registration for all the subnet IPs configured for
a zone.
Proxmox VE SDN 中的 DNS 插件用于定义 DNS API 服务器,以注册您的主机名和 IP 地址。DNS 配置与一个或多个区域相关联,为配置在某个区域的所有子网 IP 提供 DNS 注册。
12.11.1. PowerDNS Plugin
12.11.1. PowerDNS 插件
You need to enable the web server and the API in your PowerDNS config:
您需要在 PowerDNS 配置中启用 Web 服务器和 API:
api=yes
api-key=arandomgeneratedstring
webserver=yes
webserver-port=8081
The PowerDNS configuration options are:
PowerDNS 的配置选项有:
- url
-
The REST API endpoint: http://yourpowerdnserver.domain.com:8081/api/v1/servers/localhost
REST API 端点:http://yourpowerdnserver.domain.com:8081/api/v1/servers/localhost - key
-
An API access key
一个 API 访问密钥 - ttl
-
The default TTL for records
记录的默认 TTL
12.12. DHCP
The DHCP plugin in Proxmox VE SDN can be used to automatically deploy a DHCP server
for a Zone. It provides DHCP for all Subnets in a Zone that have a DHCP range
configured. Currently the only available backend plugin for DHCP is the dnsmasq
plugin.
Proxmox VE SDN 中的 DHCP 插件可用于自动部署一个区域的 DHCP 服务器。它为该区域中所有配置了 DHCP 范围的子网提供 DHCP 服务。目前,DHCP 唯一可用的后端插件是 dnsmasq 插件。
The DHCP plugin works by allocating an IP in the IPAM plugin configured in the
Zone when adding a new network interface to a VM/CT. You can find more
information on how to configure an IPAM in the
respective section of our documentation.
DHCP 插件通过在添加新的网络接口到虚拟机/容器时,在区域中配置的 IPAM 插件中分配 IP 地址。您可以在我们文档的相应章节中找到有关如何配置 IPAM 的更多信息。
When the VM starts, a mapping for the MAC address and IP gets created in the DHCP
plugin of the zone. When the network interface is removed or the VM/CT is
destroyed, the entries in the IPAM and on the DHCP server are deleted as well.
当虚拟机启动时,会在该区域的 DHCP 插件中为 MAC 地址和 IP 创建映射。当网络接口被移除或虚拟机/容器被销毁时,IPAM 和 DHCP 服务器中的条目也会被删除。
|
|
Some features (adding/editing/removing IP mappings) are currently only
available when using the PVE IPAM plugin. 某些功能(添加/编辑/删除 IP 映射)目前仅在使用 PVE IPAM 插件时可用。 |
12.12.1. Configuration 12.12.1. 配置
You can enable automatic DHCP for a zone in the Web UI via the Zones panel, by
enabling DHCP in the advanced options of a zone.
您可以通过 Web UI 的 Zones 面板启用某个区域的自动 DHCP,并在该区域的高级选项中启用 DHCP。
|
|
Currently only Simple Zones have support for automatic DHCP 目前只有简单区域(Simple Zones)支持自动 DHCP。 |
After automatic DHCP has been enabled for a Zone, DHCP Ranges need to be
configured for the subnets in a Zone. In order to do that, go to the VNets panel and
select the Subnet for which you want to configure DHCP ranges. In the edit
dialogue you can configure DHCP ranges in the respective Tab. Alternatively you
can set DHCP ranges for a Subnet via the following CLI command:
在为某个区域启用自动 DHCP 后,需要为该区域内的子网配置 DHCP 范围。为此,请进入 Vnets 面板,选择要配置 DHCP 范围的子网。在编辑对话框中,您可以在相应的标签页中配置 DHCP 范围。或者,您也可以通过以下 CLI 命令为子网设置 DHCP 范围:
pvesh set /cluster/sdn/vnets/<vnet>/subnets/<subnet> -dhcp-range start-address=10.0.1.100,end-address=10.0.1.200 -dhcp-range start-address=10.0.2.100,end-address=10.0.2.200
You also need to have a gateway configured for the subnet - otherwise
automatic DHCP will not work.
您还需要为子网配置网关,否则自动 DHCP 将无法工作。
The DHCP plugin will then allocate IPs in the IPAM only in the configured
ranges.
DHCP 插件随后只会在 IPAM 中配置的范围内分配 IP。
Do not forget to follow the installation steps for the
dnsmasq DHCP plugin as well.
不要忘记也要遵循 dnsmasq DHCP 插件的安装步骤。
12.12.2. Plugins 12.12.2. 插件
Dnsmasq Plugin Dnsmasq 插件
Currently this is the only DHCP plugin and therefore the plugin that gets used
when you enable DHCP for a zone.
目前这是唯一的 DHCP 插件,因此当您为一个区域启用 DHCP 时,将使用此插件。
For installation see the DHCP IPAM section.
有关安装,请参见 DHCP IPAM 部分。
The plugin will create a new systemd service for each zone that dnsmasq gets
deployed to. The name for the service is dnsmasq@<zone>. The lifecycle of this
service is managed by the DHCP plugin.
该插件将在每个部署了 dnsmasq 的区域创建一个新的 systemd 服务。该服务的名称为 dnsmasq@<zone>。该服务的生命周期由 DHCP 插件管理。
The plugin automatically generates the following configuration files in the
folder /etc/dnsmasq.d/<zone>:
该插件会自动在文件夹 /etc/dnsmasq.d/<zone> 中生成以下配置文件:
- 00-default.conf
-
This contains the default global configuration for a dnsmasq instance.
该文件包含 dnsmasq 实例的默认全局配置。 - 10-<zone>-<subnet_cidr>.conf
-
This file configures specific options for a subnet, such as the DNS server that should get configured via DHCP.
该文件配置子网的特定选项,例如应通过 DHCP 配置的 DNS 服务器。 - 10-<zone>-<subnet_cidr>.ranges.conf
-
This file configures the DHCP ranges for the dnsmasq instance.
该文件配置 dnsmasq 实例的 DHCP 地址范围。 - ethers
-
This file contains the MAC-address and IP mappings from the IPAM plugin. In order to override those mappings, please use the respective IPAM plugin rather than editing this file, as it will get overwritten by the dnsmasq plugin.
此文件包含来自 IPAM 插件的 MAC 地址和 IP 映射。若要覆盖这些映射,请使用相应的 IPAM 插件,而不是编辑此文件,因为该文件会被 dnsmasq 插件覆盖。
You must not edit any of the above files, since they are managed by the DHCP
plugin. In order to customize the dnsmasq configuration you can create
additional files (e.g. 90-custom.conf) in the configuration folder - they will
not get changed by the dnsmasq DHCP plugin.
您不得编辑上述任何文件,因为它们由 DHCP 插件管理。若要自定义 dnsmasq 配置,您可以在配置文件夹中创建额外的文件(例如 90-custom.conf)——这些文件不会被 dnsmasq DHCP 插件更改。
Configuration files are read in order, so you can control the order of the
configuration directives by naming your custom configuration files appropriately.
配置文件按顺序读取,因此您可以通过适当命名自定义配置文件来控制配置指令的顺序。
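As an example, a hypothetical 90-custom.conf could hand out an additional DHCP option to clients without touching the managed files (the NTP server address is a placeholder):
# /etc/dnsmasq.d/<zone>/90-custom.conf
dhcp-option=option:ntp-server,192.0.2.10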
DHCP leases are stored in the file /var/lib/misc/dnsmasq.<zone>.leases.
DHCP 租约存储在文件 /var/lib/misc/dnsmasq.<zone>.leases 中。
When using the PVE IPAM plugin, you can update, create and delete DHCP leases.
For more information please consult the documentation of
the PVE IPAM plugin. Changing DHCP leases is
currently not supported for the other IPAM plugins.
使用 PVE IPAM 插件时,您可以更新、创建和删除 DHCP 租约。更多信息请查阅 PVE IPAM 插件的文档。目前,其他 IPAM 插件不支持更改 DHCP 租约。
12.13. Firewall Integration
12.13. 防火墙集成
SDN integrates with the Proxmox VE firewall by automatically generating IPSets
which can then be referenced in the source / destination fields of firewall
rules. This happens automatically for VNets and IPAM entries.
SDN 通过自动生成 IPSet 与 Proxmox VE 防火墙集成,这些 IPSet 可以在防火墙规则的源/目标字段中引用。此操作会自动针对 VNet 和 IPAM 条目进行。
12.13.1. VNets and Subnets
12.13.1. 虚拟网络(VNet)和子网
The firewall automatically generates the following IPSets in the SDN scope for
every VNet:
防火墙会在 SDN 范围内为每个虚拟网络自动生成以下 IPSet:
- vnet-all
-
Contains the CIDRs of all subnets in a VNet
包含虚拟网络中所有子网的 CIDR 地址段 - vnet-gateway
-
Contains the IPs of the gateways of all subnets in a VNet
包含 VNet 中所有子网的网关 IP - vnet-no-gateway
-
Contains the CIDRs of all subnets in a VNet, but excludes the gateways
包含 VNet 中所有子网的 CIDR,但不包括网关 - vnet-dhcp
-
Contains all DHCP ranges configured in the subnets in a VNet
包含 VNet 中子网配置的所有 DHCP 范围
When making changes to your configuration, the IPSets update automatically, so
you do not have to update your firewall rules when changing the configuration of
your Subnets.
当您更改配置时,IPSets 会自动更新,因此在更改子网配置时,无需更新防火墙规则。
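Assuming the SDN scope is referenced with a +sdn/ prefix (check the IPSet names shown in your firewall GUI before using this), a guest firewall rule that only allows SSH from within the VNet could look roughly like this:
IN SSH(ACCEPT) -source +sdn/vnet0-all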
Simple Zone Example 简单区域示例
Assuming the configuration below for a VNet and its contained subnets:
假设以下是一个虚拟网络(VNet)及其包含子网的配置:
# /etc/pve/sdn/vnets.cfg
vnet: vnet0
zone simple
# /etc/pve/sdn/subnets.cfg
subnet: simple-192.0.2.0-24
vnet vnet0
dhcp-range start-address=192.0.2.100,end-address=192.0.2.199
gateway 192.0.2.1
subnet: simple-2001:db8::-64
vnet vnet0
dhcp-range start-address=2001:db8::1000,end-address=2001:db8::1999
gateway 2001:db8::1
In this example we configured an IPv4 subnet in the VNet vnet0, with
192.0.2.0/24 as its IP range, 192.0.2.1 as the gateway, and a DHCP range of
192.0.2.100 - 192.0.2.199.
在此示例中,我们在虚拟网络 vnet0 中配置了一个 IPv4 子网,IP 范围为 192.0.2.0/24,网关为 192.0.2.1,DHCP 范围为 192.0.2.100 - 192.0.2.199。
Additionally we configured an IPv6 subnet with 2001:db8::/64 as the IP range,
2001:db8::1 as the gateway and a DHCP range of 2001:db8::1000 -
2001:db8::1999.
此外,我们配置了一个 IPv6 子网,IP 范围为 2001:db8::/64,网关为 2001:db8::1,DHCP 范围为 2001:db8::1000 - 2001:db8::1999。
The respective auto-generated IPsets for vnet0 would then contain the following
elements:
vnet0 的相应自动生成的 IP 集合将包含以下元素:
- vnet0-all
-
-
192.0.2.0/24
-
2001:db8::/64
-
- vnet0-gateway
-
-
192.0.2.1
-
2001:db8::1
-
- vnet0-no-gateway vnet0-无网关
-
-
192.0.2.0/24
-
2001:db8::/64
-
!192.0.2.1
-
!2001:db8::1
-
- vnet0-dhcp
-
-
192.0.2.100 - 192.0.2.199
-
2001:db8::1000 - 2001:db8::1999
-
12.13.2. IPAM
If you are using the built-in PVE IPAM, then the firewall automatically
generates an IPset for every guest that has entries in the IPAM. The respective
IPset for a guest with ID 100 would be guest-ipam-100. It contains all IP
addresses from all IPAM entries. So if guest 100 is member of multiple VNets,
then the IPset would contain the IPs from all VNets.
如果您使用内置的 PVE IPAM,那么防火墙会自动为每个在 IPAM 中有条目的客户机生成一个 IPset。ID 为 100 的客户机对应的 IPset 名称为 guest-ipam-100。它包含所有 IPAM 条目中的所有 IP 地址。因此,如果客户机 100 是多个 VNet 的成员,那么该 IPset 将包含所有 VNet 的 IP。
When entries get added / updated / deleted, then the respective IPSets will be
updated accordingly.
当条目被添加/更新/删除时,相应的 IPset 也会相应更新。
|
|
If you remove all entries for a guest while firewall rules still
reference the auto-generated IPSet, the firewall will fail to update the
ruleset, since it references a non-existing IPSet. 当删除某个客户机的所有条目时,如果防火墙规则仍然引用自动生成的 IPSet,则防火墙将无法更新规则集,因为它引用了一个不存在的 IPSet。 |
12.14. Examples 12.14. 示例
This section presents multiple configuration examples tailored for common SDN
use cases. It aims to offer tangible implementations, providing additional
details to enhance comprehension of the available configuration options.
本节展示了多个针对常见 SDN 用例的配置示例。旨在提供具体的实现方案,附加详细信息以增强对可用配置选项的理解。
12.14.1. Simple Zone Example
12.14.1. 简单区域示例
Simple zone networks create an isolated network for guests on a single host to
connect to each other.
简单区域网络为单个主机上的客户机创建一个隔离的网络,使它们能够相互连接。
|
|
Connections between guests are possible if all guests reside on the same host,
but they cannot be reached from other nodes. 如果所有客户机都位于同一主机上,则客户机之间可以相互连接,但无法在其他节点上访问。 |
-
Create a simple zone named simple.
创建一个名为 simple 的简单区域。 -
Add a VNet named vnet1.
添加一个名为 vnet1 的虚拟网络(VNet)。 -
Create a Subnet with a gateway and the SNAT option enabled.
创建一个带有网关并启用 SNAT 选项的子网。 -
This creates a network bridge vnet1 on the node. Assign this bridge to the guests that shall join the network and configure an IP address.
这将在节点上创建一个名为 vnet1 的网络桥。将此桥分配给需要加入网络的虚拟机,并配置 IP 地址。
The network interface configuration in two VMs may look like this, which allows
them to communicate via the 10.0.1.0/24 network.
两个虚拟机的网络接口配置可能如下所示,这使它们能够通过 10.0.1.0/24 网络进行通信。
allow-hotplug ens19
iface ens19 inet static
address 10.0.1.14/24
allow-hotplug ens19
iface ens19 inet static
address 10.0.1.15/24
12.14.2. Source NAT Example
12.14.2. 源地址转换示例
If you want to allow outgoing connections for guests in the simple network zone,
the simple zone offers a Source NAT (SNAT) option.
如果您想允许简单网络区域内的客户机发起外部连接,简单区域提供了源地址转换(SNAT)选项。
Starting from the configuration above, add a
Subnet to the VNet vnet1, set a gateway IP and enable the SNAT option.
基于上述配置,向虚拟网络 vnet1 添加一个子网,设置网关 IP 并启用 SNAT 选项。
Subnet: 172.16.0.0/24 Gateway: 172.16.0.1 SNAT: checked
In the guests configure the static IP address inside the subnet’s IP range.
在客户机中配置子网 IP 范围内的静态 IP 地址。
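For example, a guest interface configuration inside that range could look like this (the host address .100 is arbitrary):
auto ens19
iface ens19 inet static
address 172.16.0.100/24
gateway 172.16.0.1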
The node itself will join this network with the Gateway IP 172.16.0.1 and
function as the NAT gateway for guests within the subnet range.
节点本身将以网关 IP 172.16.0.1 加入该网络,并作为子网范围内客户机的 NAT 网关。
12.14.3. VLAN Setup Example
12.14.3. VLAN 设置示例
When VMs on different nodes need to communicate through an isolated network, the
VLAN zone allows network level isolation using VLAN tags.
当不同节点上的虚拟机需要通过隔离网络进行通信时,VLAN 区域允许使用 VLAN 标签实现网络级别的隔离。
Create a VLAN zone named myvlanzone:
创建一个名为 myvlanzone 的 VLAN 区域:
ID: myvlanzone Bridge: vmbr0
Create a VNet named myvnet1 with VLAN tag 10 and the previously created
myvlanzone.
创建一个名为 myvnet1 的虚拟网络,使用 VLAN 标签 10 和之前创建的 myvlanzone。
ID: myvnet1 Zone: myvlanzone Tag: 10
Apply the configuration through the main SDN panel, to create VNets locally on
each node.
通过主 SDN 面板应用配置,在每个节点本地创建虚拟网络(VNet)。
Create a Debian-based virtual machine (vm1) on node1, with a vNIC on myvnet1.
在节点 1 上创建一个基于 Debian 的虚拟机(vm1),并在 myvnet1 上配置一个虚拟网卡(vNIC)。
Use the following network configuration for this VM:
为该虚拟机使用以下网络配置:
auto eth0
iface eth0 inet static
address 10.0.3.100/24
Create a second virtual machine (vm2) on node2, with a vNIC on the same VNet
myvnet1 as vm1.
在节点 2 上创建第二个虚拟机(vm2),并在与 vm1 相同的虚拟网络 myvnet1 上配置一个虚拟网卡(vNIC)。
Use the following network configuration for this VM:
为此虚拟机使用以下网络配置:
auto eth0
iface eth0 inet static
address 10.0.3.101/24
Following this, you should be able to ping between both VMs using that network.
完成后,您应该能够使用该网络在两个虚拟机之间进行 ping 操作。
12.14.4. QinQ Setup Example
12.14.4. QinQ 设置示例
This example configures two QinQ zones and adds two VMs to each zone to
demonstrate the additional layer of VLAN tags which allows the configuration of
more isolated VLANs.
本示例配置了两个 QinQ 区域,并向每个区域添加了两个虚拟机,以演示额外的 VLAN 标签层,这使得配置更多隔离的 VLAN 成为可能。
A typical use case for this configuration is a hosting provider that provides an
isolated network to customers for VM communication but isolates the VMs from
other customers.
这种配置的典型用例是托管服务提供商为客户提供一个隔离的网络,用于虚拟机之间的通信,同时将虚拟机与其他客户隔离开。
Create a QinQ zone named qinqzone1 with service VLAN 20
创建一个名为 qinqzone1 的 QinQ 区域,服务 VLAN 为 20
ID: qinqzone1 Bridge: vmbr0 Service VLAN: 20
Create another QinQ zone named qinqzone2 with service VLAN 30
创建另一个名为 qinqzone2 的 QinQ 区域,服务 VLAN 为 30
ID: qinqzone2 Bridge: vmbr0 Service VLAN: 30
Create a VNet named myvnet1 with VLAN-ID 100 on the previously created
qinqzone1 zone.
在之前创建的 qinqzone1 区域上,创建一个名为 myvnet1 的虚拟网络,VLAN-ID 为 100。
ID: qinqvnet1 Zone: qinqzone1 Tag: 100
Create a myvnet2 with VLAN-ID 100 on the qinqzone2 zone.
在 qinqzone2 区域上创建一个 VLAN-ID 为 100 的 myvnet2。
ID: qinqvnet2 Zone: qinqzone2 Tag: 100
Apply the configuration on the main SDN web interface panel to create VNets
locally on each node.
在主 SDN 网络界面面板上应用配置,以在每个节点本地创建虚拟网络(VNet)。
Create four Debian-based virtual machines (vm1, vm2, vm3, vm4) and add network
interfaces to vm1 and vm2 with bridge qinqvnet1 and vm3 and vm4 with bridge
qinqvnet2.
创建四个基于 Debian 的虚拟机(vm1、vm2、vm3、vm4),并为 vm1 和 vm2 添加连接到桥接 qinqvnet1 的网络接口,为 vm3 和 vm4 添加连接到桥接 qinqvnet2 的网络接口。
Inside the VM, configure the IP addresses of the interfaces, for example via
/etc/network/interfaces:
在虚拟机内部配置接口的 IP 地址,例如通过 /etc/network/interfaces:
auto eth0
iface eth0 inet static
address 10.0.3.101/24
Configure all four VMs to have IP addresses from the 10.0.3.101 to
10.0.3.104 range.
将所有四台虚拟机配置为使用 10.0.3.101 到 10.0.3.104 范围内的 IP 地址。
Now you should be able to ping between the VMs vm1 and vm2, as well as
between vm3 and vm4. However, neither vm1 nor vm2 can ping vm3 or
vm4, as they are in a different zone with a different service VLAN.
现在你应该能够在虚拟机 vm1 和 vm2 之间,以及 vm3 和 vm4 之间进行 ping 操作。然而,虚拟机 vm1 或 vm2 都无法 ping 通虚拟机 vm3 或 vm4,因为它们位于不同的区域,使用不同的服务 VLAN。
12.14.5. VXLAN Setup Example
12.14.5. VXLAN 设置示例
The example assumes a cluster with three nodes, with the node IP addresses
192.168.0.1, 192.168.0.2 and 192.168.0.3.
该示例假设一个包含三个节点的集群,节点 IP 地址分别为 192.168.0.1、192.168.0.2 和 192.168.0.3。
Create a VXLAN zone named myvxlanzone and add all IPs from the nodes to the
peer address list. Use the default MTU of 1450 or configure accordingly.
创建一个名为 myvxlanzone 的 VXLAN 区域,并将所有节点的 IP 添加到对等地址列表中。使用默认的 MTU 1450 或根据需要进行配置。
ID: myvxlanzone Peers Address List: 192.168.0.1,192.168.0.2,192.168.0.3
Create a VNet named vxvnet1 using the VXLAN zone myvxlanzone created
previously.
使用之前创建的 VXLAN 区域 myvxlanzone 创建一个名为 vxvnet1 的虚拟网络(VNet)。
ID: vxvnet1 Zone: myvxlanzone Tag: 100000
Apply the configuration on the main SDN web interface panel to create VNets
locally on each node.
在主 SDN 网页界面面板上应用配置,以在每个节点本地创建虚拟网络(VNet)。
Create a Debian-based virtual machine (vm1) on node1, with a vNIC on vxvnet1.
在 node1 上创建一个基于 Debian 的虚拟机(vm1),并在 vxvnet1 上配置一个虚拟网卡(vNIC)。
Use the following network configuration for this VM (note the lower MTU).
为此虚拟机使用以下网络配置(注意较低的 MTU)。
auto eth0
iface eth0 inet static
address 10.0.3.100/24
mtu 1450
Create a second virtual machine (vm2) on node3, with a vNIC on the same VNet
vxvnet1 as vm1.
在 node3 上创建第二台虚拟机(vm2),其虚拟网卡连接到与 vm1 相同的虚拟网络 vxvnet1。
Use the following network configuration for this VM:
为此虚拟机使用以下网络配置:
auto eth0
iface eth0 inet static
address 10.0.3.101/24
mtu 1450
Then, you should be able to ping between vm1 and vm2.
然后,你应该能够在 vm1 和 vm2 之间进行 ping 操作。
12.14.6. EVPN Setup Example
12.14.6. EVPN 设置示例
The example assumes a cluster with three nodes (node1, node2, node3) with IP
addresses 192.168.0.1, 192.168.0.2 and 192.168.0.3.
该示例假设一个包含三个节点(node1、node2、node3)的集群,IP 地址分别为 192.168.0.1、192.168.0.2 和 192.168.0.3。
Create an EVPN controller, using a private ASN number and the above node
addresses as peers.
创建一个 EVPN 控制器,使用私有 ASN 号,并将上述节点地址作为对等体。
ID: myevpnctl ASN#: 65000 Peers: 192.168.0.1,192.168.0.2,192.168.0.3
Create an EVPN zone named myevpnzone, assign the previously created
EVPN-controller and define node1 and node2 as exit nodes.
创建一个名为 myevpnzone 的 EVPN 区域,分配之前创建的 EVPN 控制器,并将 node1 和 node2 定义为出口节点。
ID: myevpnzone
VRF VXLAN Tag: 10000
Controller: myevpnctl
MTU: 1450
VNet MAC Address: 32:F4:05:FE:6C:0A
Exit Nodes: node1,node2
Create the first VNet named myvnet1 using the EVPN zone myevpnzone.
使用 EVPN 区域 myevpnzone 创建第一个名为 myvnet1 的虚拟网络。
ID: myvnet1 Zone: myevpnzone Tag: 11000
Create a subnet on myvnet1:
在 myvnet1 上创建一个子网:
Subnet: 10.0.1.0/24 Gateway: 10.0.1.1
Create the second VNet named myvnet2 using the same EVPN zone myevpnzone.
使用相同的 EVPN 区域 myevpnzone 创建第二个名为 myvnet2 的虚拟网络。
ID: myvnet2 Zone: myevpnzone Tag: 12000
Create a different subnet on myvnet2:
在 myvnet2 上创建一个不同的子网:
Subnet: 10.0.2.0/24 Gateway: 10.0.2.1
Apply the configuration from the main SDN web interface panel to create VNets
locally on each node and generate the FRR configuration.
从主 SDN 网页界面面板应用配置,在每个节点本地创建虚拟网络(VNet)并生成 FRR 配置。
Create a Debian-based virtual machine (vm1) on node1, with a vNIC on myvnet1.
在节点 1 上创建一个基于 Debian 的虚拟机(vm1),并在 myvnet1 上配置一个虚拟网卡(vNIC)。
Use the following network configuration for vm1:
为 vm1 使用以下网络配置:
auto eth0
iface eth0 inet static
address 10.0.1.100/24
gateway 10.0.1.1
mtu 1450
Create a second virtual machine (vm2) on node2, with a vNIC on the other VNet
myvnet2.
在节点 2 上创建第二个虚拟机(vm2),并在另一个虚拟网络 myvnet2 上配置一个虚拟网卡(vNIC)。
Use the following network configuration for vm2:
对 vm2 使用以下网络配置:
auto eth0
iface eth0 inet static
address 10.0.2.100/24
gateway 10.0.2.1
mtu 1450
Now you should be able to ping vm2 from vm1, and vm1 from vm2.
现在你应该能够从 vm1 ping 通 vm2,也能从 vm2 ping 通 vm1。
If you ping an external IP from vm2 on the non-gateway node3, the packet
will go to the configured myvnet2 gateway, then will be routed to the exit
nodes (node1 or node2) and from there it will leave those nodes over the
default gateway configured on node1 or node2.
如果你从非网关节点 node3 上的 vm2 ping 一个外部 IP,数据包将会发送到配置的 myvnet2 网关,然后被路由到出口节点(node1 或 node2),接着通过 node1 或 node2 上配置的默认网关离开这些节点。
|
|
You need to add reverse routes for the 10.0.1.0/24 and 10.0.2.0/24
networks to node1 and node2 on your external gateway, so that the public network
can reply back. 你需要在外部网关上为 10.0.1.0/24 和 10.0.2.0/24 网络添加到 node1 和 node2 的反向路由,以便公共网络能够回复。 |
If you have configured an external BGP router, the BGP-EVPN routes (10.0.1.0/24
and 10.0.2.0/24 in this example), will be announced dynamically.
如果您配置了外部 BGP 路由器,BGP-EVPN 路由(本例中为 10.0.1.0/24 和 10.0.2.0/24)将会被动态宣布。
12.15. Notes 12.15. 注意事项
12.15.1. Multiple EVPN Exit Nodes
12.15.1. 多个 EVPN 出口节点
If you have multiple gateway nodes, you should disable the rp_filter (Strict
Reverse Path Filter) option, because packets can arrive at one node but go out
from another node.
如果您有多个网关节点,应该禁用 rp_filter(严格反向路径过滤)选项,因为数据包可能从一个节点到达,但从另一个节点发出。
Add the following to /etc/sysctl.conf:
将以下内容添加到 /etc/sysctl.conf:
net.ipv4.conf.default.rp_filter=0
net.ipv4.conf.all.rp_filter=0
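The setting takes effect after a reboot, or immediately after reloading the sysctl configuration:
sysctl -p /etc/sysctl.conf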
12.15.2. VXLAN IPSEC Encryption
12.15.2. VXLAN IPSEC 加密
To add IPSEC encryption on top of a VXLAN, this example shows how to use
strongswan.
要在 VXLAN 之上添加 IPSEC 加密,本示例展示了如何使用 strongswan。
You’ll need to reduce the MTU by an additional 60 bytes for IPv4 or 80 bytes for
IPv6 to handle the encryption overhead.
您需要将 MTU 额外减少 60 字节(IPv4)或 80 字节(IPv6)以处理加密。
So with default real 1500 MTU, you need to use a MTU of 1370 (1370 + 80 (IPSEC)
+ 50 (VXLAN) == 1500).
所以在默认的真实 1500 MTU 下,你需要使用 1370 的 MTU(1370 + 80(IPSEC) + 50(VXLAN) == 1500)。
Install strongswan on the host.
在主机上安装 strongswan。
apt install strongswan
Add configuration to /etc/ipsec.conf. We only need to encrypt traffic from
the VXLAN UDP port 4789.
向 /etc/ipsec.conf 添加配置。我们只需要加密来自 VXLAN UDP 端口 4789 的流量。
conn %default
ike=aes256-sha1-modp1024! # the fastest, but reasonably secure cipher on modern HW
esp=aes256-sha1!
leftfirewall=yes # this is necessary when using Proxmox VE firewall rules
conn output
rightsubnet=%dynamic[udp/4789]
right=%any
type=transport
authby=psk
auto=route
conn input
leftsubnet=%dynamic[udp/4789]
type=transport
authby=psk
auto=route
Generate a pre-shared key with:
使用以下命令生成预共享密钥:
openssl rand -base64 128
and add the key to /etc/ipsec.secrets, so that the file contents looks like:
并将密钥添加到 /etc/ipsec.secrets 文件中,使文件内容如下所示:
: PSK <generatedbase64key>
Copy the PSK and the configuration to all nodes participating in the VXLAN network.
将预共享密钥(PSK)和配置复制到所有参与 VXLAN 网络的节点。
13. Proxmox VE Firewall
13. Proxmox VE 防火墙
Proxmox VE Firewall provides an easy way to protect your IT
infrastructure. You can setup firewall rules for all hosts
inside a cluster, or define rules for virtual machines and
containers. Features like firewall macros, security groups, IP sets
and aliases help to make that task easier.
Proxmox VE 防火墙为保护您的 IT 基础设施提供了简便的方法。您可以为集群内的所有主机设置防火墙规则,或为虚拟机和容器定义规则。防火墙宏、安全组、IP 集合和别名等功能有助于简化这项任务。
While all configuration is stored on the cluster file system, the
iptables-based firewall service runs on each cluster node, and thus provides
full isolation between virtual machines. The distributed nature of
this system also provides much higher bandwidth than a central
firewall solution.
虽然所有配置都存储在集群文件系统中,但基于 iptables 的防火墙服务运行在每个集群节点上,因此能够在虚拟机之间提供完全隔离。该系统的分布式特性也比集中式防火墙解决方案提供了更高的带宽。
The firewall has full support for IPv4 and IPv6. IPv6 support is fully
transparent, and we filter traffic for both protocols by default. So
there is no need to maintain a different set of rules for IPv6.
防火墙完全支持 IPv4 和 IPv6。IPv6 支持是完全透明的,我们默认对两种协议的流量进行过滤。因此,无需为 IPv6 维护一套不同的规则。
13.1. Directions & Zones
13.1. 方向与区域
The Proxmox VE firewall groups the network into multiple logical zones. You can
define rules for each zone independently. Depending on the zone, you can define
rules for incoming, outgoing or forwarded traffic.
Proxmox VE 防火墙将网络分组为多个逻辑区域。您可以为每个区域独立定义规则。根据区域的不同,您可以为入站、出站或转发流量定义规则。
13.1.1. Directions 13.1.1. 方向
There are 3 directions that you can choose from when defining rules for a zone:
定义区域规则时,可以选择 3 个方向:
- In 进入
-
Traffic that is arriving in a zone.
进入区域的流量。 - Out 出站
-
Traffic that is leaving a zone.
离开某个区域的流量。 - Forward 转发
-
Traffic that is passing through a zone. In the host zone this can be routed traffic (when the host is acting as a gateway or performing NAT). At a VNet-level this affects all traffic that is passing by a VNet, including traffic from/to bridged network interfaces.
通过某个区域的流量。在主机区域,这可以是路由流量(当主机充当网关或执行 NAT 时)。在虚拟网络(VNet)级别,这影响所有经过虚拟网络的流量,包括来自/去往桥接网络接口的流量。
Creating rules for forwarded traffic is currently only possible when using the new nftables-based proxmox-firewall. Any forward rules will be ignored by the stock pve-firewall and have no effect!
创建转发流量规则目前仅在使用基于 nftables 的新 proxmox-firewall 时可行。任何转发规则都会被默认的 pve-firewall 忽略且无效!
13.1.2. Zones 13.1.2. 区域
There are 3 different zones that you can define firewall rules for:
您可以为以下三种不同的区域定义防火墙规则:
- Host 主机
-
Traffic going from/to a host, or traffic that is forwarded by a host. You can define rules for this zone either at the datacenter level or at the host level. Rules at host level take precedence over rules at datacenter level.
流量指的是从主机发出或到达主机的流量,或者由主机转发的流量。您可以在数据中心级别或主机级别为此区域定义规则。主机级别的规则优先于数据中心级别的规则。 - VM 虚拟机
-
Traffic going from/to a VM or CT. You cannot define rules for forwarded traffic, only for incoming / outgoing traffic.
流量指的是从虚拟机(VM)或容器(CT)发出或到达的流量。您不能为转发流量定义规则,只能为进出流量定义规则。 - VNet 虚拟网络(VNet)
-
Traffic passing through a SDN VNet, either from guest to guest or from host to guest and vice-versa. Since this traffic is always forwarded traffic, it is only possible to create rules with direction forward.
通过 SDN 虚拟网络传递的流量,无论是来宾到来宾,还是主机到来宾及反向流量。由于这些流量始终是转发流量,因此只能创建方向为转发的规则。
Creating rules on a VNet-level is currently only possible when using the new nftables-based proxmox-firewall. Any VNet-level rules will be ignored by the stock pve-firewall and have no effect!
目前仅在使用基于 nftables 的新 proxmox-firewall 时,才可以在虚拟网络级别创建规则。任何虚拟网络级别的规则都会被默认的 pve-firewall 忽略,且不会生效!
13.2. Configuration Files
13.2. 配置文件
All firewall related configuration is stored on the proxmox cluster
file system. So those files are automatically distributed to all
cluster nodes, and the pve-firewall service updates the underlying
iptables rules automatically on changes.
所有防火墙相关的配置都存储在 proxmox 集群文件系统中。因此,这些文件会自动分发到所有集群节点,pve-firewall 服务会在配置更改时自动更新底层的 iptables 规则。
You can configure anything using the GUI (i.e. Datacenter → Firewall,
or on a Node → Firewall), or you can edit the configuration files
directly using your preferred editor.
您可以使用图形用户界面进行任何配置(例如,数据中心 → 防火墙,或在节点 → 防火墙),也可以使用您喜欢的编辑器直接编辑配置文件。
Firewall configuration files contain sections of key-value
pairs. Lines beginning with a # and blank lines are considered
comments. Sections start with a header line containing the section
name enclosed in [ and ].
防火墙配置文件包含键值对的部分。以 # 开头的行和空行被视为注释。部分以包含在 [ 和 ] 中的部分名称的标题行开始。
13.2.1. Cluster Wide Setup
13.2.1. 集群范围设置
The cluster-wide firewall configuration is stored at:
集群范围的防火墙配置存储在:
/etc/pve/firewall/cluster.fw
The configuration can contain the following sections:
配置可以包含以下部分:
- [OPTIONS]
-
This is used to set cluster-wide firewall options.
用于设置集群范围的防火墙选项。 -
ebtables: <boolean> (default = 1)
ebtables: <布尔值>(默认 = 1) -
Enable ebtables rules cluster wide.
启用整个集群的 ebtables 规则。 -
enable: <integer> (0 - N)
enable: <整数> (0 - N) -
Enable or disable the firewall cluster wide.
启用或禁用整个集群的防火墙。 -
log_ratelimit: [enable=]<1|0> [,burst=<integer>] [,rate=<rate>]
log_ratelimit: [enable=]<1|0> [,burst=<整数>] [,rate=<速率>] -
Log ratelimiting settings
日志速率限制设置-
burst=<integer> (0 - N) (default = 5)
burst=<整数> (0 - N)(默认值 = 5) -
Initial burst of packages which will always get logged before the rate is applied
初始突发数据包数,在应用速率限制前这些数据包将始终被记录 -
enable=<boolean> (default = 1)
enable=<布尔值>(默认值 = 1) -
Enable or disable log rate limiting
启用或禁用日志速率限制 -
rate=<rate> (default = 1/second)
rate=<rate>(默认 = 每秒 1 次) -
Frequency with which the burst bucket gets refilled
突发桶被重新填充的频率
-
burst=<integer> (0 - N) (default = 5)
-
policy_forward: <ACCEPT | DROP>
policy_forward: <接受 | 丢弃> -
Forward policy. 转发策略。
-
policy_in: <ACCEPT | DROP | REJECT>
policy_in: <接受 | 丢弃 | 拒绝> -
Input policy. 输入策略。
-
policy_out: <ACCEPT | DROP | REJECT>
policy_out: <接受 | 丢弃 | 拒绝> -
Output policy. 输出策略。
- [RULES] [规则]
-
This section contains cluster-wide firewall rules for all nodes.
本节包含所有节点的集群范围防火墙规则。 - [IPSET <name>]
-
Cluster wide IP set definitions.
集群范围的 IP 集合定义。 - [GROUP <name>]
-
Cluster wide security group definitions.
集群范围的安全组定义。 - [ALIASES]
-
Cluster wide Alias definitions.
集群范围的别名定义。
Enabling the Firewall 启用防火墙
The firewall is completely disabled by default, so you need to
set the enable option here:
防火墙默认是完全禁用的,因此您需要在此处设置启用选项:
[OPTIONS]
# enable firewall (cluster-wide setting, default is disabled)
enable: 1
If you enable the firewall, traffic to all hosts is blocked by default. The only exceptions are the web GUI (port 8006) and SSH (port 22) from your local network.
如果启用防火墙,默认情况下所有主机的流量都会被阻止。唯一的例外是来自您本地网络的 WebGUI(8006 端口)和 ssh(22 端口)。
If you want to administer your Proxmox VE hosts remotely, you need to create
rules to allow traffic from those remote IPs to the web GUI (port 8006). You may
also want to allow SSH (port 22), and maybe SPICE (port 3128).
如果您想从远程管理您的 Proxmox VE 主机,您需要创建规则,允许来自这些远程 IP 的流量访问 Web GUI(端口 8006)。您可能还想允许 ssh(端口 22),以及可能的 SPICE(端口 3128)。
Please open an SSH connection to one of your Proxmox VE hosts before enabling the firewall. That way you still have access to the host if something goes wrong.
在启用防火墙之前,请先打开到其中一台 Proxmox VE 主机的 SSH 连接。这样如果出现问题,您仍然可以访问主机。
To simplify that task, you can instead create an IPSet called
“management”, and add all remote IPs there. This creates all the firewall
rules required to access the GUI remotely.
为了简化这项任务,您可以创建一个名为“management”的 IPSet,并将所有远程 IP 添加到其中。这将创建所有访问远程 GUI 所需的防火墙规则。
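For illustration, such an IP set could look like this (the addresses are placeholders for your admin machines):
# /etc/pve/firewall/cluster.fw
[IPSET management]
203.0.113.10
198.51.100.0/24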
13.2.2. Host Specific Configuration
13.2.2. 主机特定配置
Host related configuration is read from:
主机相关配置读取自:
/etc/pve/nodes/<nodename>/host.fw
This is useful if you want to overwrite rules from cluster.fw
config. You can also increase log verbosity, and set netfilter related
options. The configuration can contain the following sections:
如果您想覆盖 cluster.fw 配置中的规则,这非常有用。您还可以增加日志详细程度,并设置与 netfilter 相关的选项。配置可以包含以下部分:
- [OPTIONS]
-
This is used to set host related firewall options.
用于设置主机相关的防火墙选项。 - enable: <boolean>
-
Enable host firewall rules.
启用主机防火墙规则。 -
log_level_forward: <alert | crit | debug | emerg | err | info | nolog | notice | warning>
log_level_forward: <alert | crit | 调试 | emerg | err | info | nolog | notice | warning> -
Log level for forwarded traffic.
转发流量的日志级别。 -
log_level_in: <alert | crit | debug | emerg | err | info | nolog | notice | warning>
log_level_in: <alert | crit | 调试 | emerg | err | info | nolog | notice | warning> -
Log level for incoming traffic.
传入流量的日志级别。 -
log_level_out: <alert | crit | debug | emerg | err | info | nolog | notice | warning>
log_level_out: <alert | crit | 调试 | emerg | err | info | nolog | notice | warning> -
Log level for outgoing traffic.
传出流量的日志级别。 -
log_nf_conntrack: <boolean> (default = 0)
log_nf_conntrack: <布尔值>(默认 = 0) -
Enable logging of conntrack information.
启用 conntrack 信息的日志记录。 -
ndp: <boolean> (default = 0)
ndp: <布尔值>(默认 = 0) -
Enable NDP (Neighbor Discovery Protocol).
启用 NDP(邻居发现协议)。 -
nf_conntrack_allow_invalid: <boolean> (default = 0)
nf_conntrack_allow_invalid: <boolean>(默认值 = 0) -
Allow invalid packets on connection tracking.
允许连接跟踪中的无效数据包。 -
nf_conntrack_helpers: <string> (default = ``)
nf_conntrack_helpers: <string>(默认值 = ``) -
Enable conntrack helpers for specific protocols. Supported protocols: amanda, ftp, irc, netbios-ns, pptp, sane, sip, snmp, tftp
为特定协议启用连接跟踪助手。支持的协议:amanda、ftp、irc、netbios-ns、pptp、sane、sip、snmp、tftp -
nf_conntrack_max: <integer> (32768 - N) (default = 262144)
nf_conntrack_max: <整数> (32768 - N) (默认 = 262144) -
Maximum number of tracked connections.
最大跟踪连接数。 -
nf_conntrack_tcp_timeout_established: <integer> (7875 - N) (default = 432000)
nf_conntrack_tcp_timeout_established: <整数> (7875 - N) (默认 = 432000) -
Conntrack established timeout.
Conntrack 已建立连接的超时时间。 -
nf_conntrack_tcp_timeout_syn_recv: <integer> (30 - 60) (default = 60)
nf_conntrack_tcp_timeout_syn_recv: <整数> (30 - 60) (默认 = 60) -
Conntrack syn recv timeout.
Conntrack syn 接收超时。 -
nftables: <boolean> (default = 0)
nftables: <布尔值> (默认 = 0) -
Enable nftables based firewall (tech preview)
启用基于 nftables 的防火墙(技术预览) - nosmurfs: <boolean>
-
Enable SMURFS filter. 启用 SMURFS 过滤器。
-
protection_synflood: <boolean> (default = 0)
protection_synflood: <boolean> (默认 = 0) -
Enable synflood protection
启用 synflood 保护 -
protection_synflood_burst: <integer> (default = 1000)
protection_synflood_burst: <整数>(默认值 = 1000) -
Synflood protection rate burst by ip src.
按源 IP 的 Synflood 保护速率突发值。 -
protection_synflood_rate: <integer> (default = 200)
protection_synflood_rate: <整数>(默认值 = 200) -
Synflood protection rate syn/sec by ip src.
按源 IP 的 Synflood 保护速率,单位为 syn/秒。 -
smurf_log_level: <alert | crit | debug | emerg | err | info | nolog | notice | warning>
smurf_log_level: <alert | crit | 调试 | emerg | err | info | nolog | notice | warning> -
Log level for SMURFS filter.
SMURFS 过滤器的日志级别。 -
tcp_flags_log_level: <alert | crit | debug | emerg | err | info | nolog | notice | warning>
tcp_flags_log_level: <alert | crit | 调试 | emerg | err | info | nolog | notice | warning> -
Log level for illegal tcp flags filter.
非法 TCP 标志过滤器的日志级别。 -
tcpflags: <boolean> (default = 0)
tcpflags: <布尔值>(默认 = 0) -
Filter illegal combinations of TCP flags.
过滤非法的 TCP 标志组合。 - [RULES] [规则]
-
This section contains host-specific firewall rules.
本节包含主机特定的防火墙规则。
13.2.3. VM/Container Configuration
13.2.3. 虚拟机/容器配置
VM firewall configuration is read from:
虚拟机防火墙配置读取自:
/etc/pve/firewall/<VMID>.fw
and contains the following data:
并包含以下数据:
- [OPTIONS]
-
This is used to set VM/Container related firewall options.
这用于设置虚拟机/容器相关的防火墙选项。 -
dhcp: <boolean> (default = 0)
dhcp: <布尔值>(默认 = 0) -
Enable DHCP. 启用 DHCP。
-
enable: <boolean> (default = 0)
enable: <布尔值>(默认 = 0) -
Enable/disable firewall rules.
启用/禁用防火墙规则。 - ipfilter: <boolean> ipfilter: <布尔值>
-
Enable default IP filters. This is equivalent to adding an empty ipfilter-net<id> ipset for every interface. Such ipsets implicitly contain sane default restrictions such as restricting IPv6 link local addresses to the one derived from the interface’s MAC address. For containers the configured IP addresses will be implicitly added.
启用默认的 IP 过滤器。这相当于为每个接口添加一个空的 ipfilter-net<id> ipset。此类 ipset 隐式包含合理的默认限制,例如将 IPv6 链路本地地址限制为从接口的 MAC 地址派生的地址。对于容器,配置的 IP 地址将被隐式添加。 -
log_level_in: <alert | crit | debug | emerg | err | info | nolog | notice | warning>
log_level_in: <alert | crit | 调试 | emerg | err | info | nolog | notice | warning> -
Log level for incoming traffic.
传入流量的日志级别。 -
log_level_out: <alert | crit | debug | emerg | err | info | nolog | notice | warning>
log_level_out: <alert | crit | 调试 | emerg | err | info | nolog | notice | warning> -
Log level for outgoing traffic.
传出流量的日志级别。 -
macfilter: <boolean> (default = 1)
macfilter: <boolean>(默认 = 1) -
Enable/disable MAC address filter.
启用/禁用 MAC 地址过滤。 -
ndp: <boolean> (default = 0)
ndp: <boolean>(默认值 = 0) -
Enable NDP (Neighbor Discovery Protocol).
启用 NDP(邻居发现协议)。 -
policy_in: <ACCEPT | DROP | REJECT>
policy_in: <接受 | 丢弃 | 拒绝> -
Input policy. 输入策略。
-
policy_out: <ACCEPT | DROP | REJECT>
policy_out: <接受 | 丢弃 | 拒绝> -
Output policy. 输出策略。
- radv: <boolean> radv: <布尔值>
-
Allow sending Router Advertisement.
允许发送路由器通告。 - [RULES] [规则]
-
This section contains VM/Container firewall rules.
本节包含虚拟机/容器防火墙规则。 - [IPSET <name>]
-
IP set definitions. IP 集合定义。
- [ALIASES] [别名]
-
IP Alias definitions. IP 别名定义。
Enabling the Firewall for VMs and Containers
为虚拟机和容器启用防火墙
Each virtual network device has its own firewall enable flag. So you
can selectively enable the firewall for each interface. This is
required in addition to the general firewall enable option.
每个虚拟网络设备都有自己的防火墙启用标志。因此,您可以为每个接口选择性地启用防火墙。这是除了通用防火墙启用选项之外的必要设置。
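For example, the firewall flag is part of the guest's network device definition. A hedged sketch of a VM configuration line with the firewall enabled on net0 (MAC address and bridge are placeholders):
# /etc/pve/qemu-server/<VMID>.conf
net0: virtio=AA:BB:CC:DD:EE:FF,bridge=vmbr0,firewall=1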
13.2.4. VNet Configuration
13.2.4. 虚拟网络配置
VNet related configuration is read from:
虚拟网络相关配置读取自:
/etc/pve/sdn/firewall/<vnet_name>.fw
This can be used for setting firewall configuration globally on a VNet level,
without having to set firewall rules for each VM inside the VNet separately. It
can only contain rules for the FORWARD direction, since there is no notion of
incoming or outgoing traffic. This affects all traffic travelling from one
bridge port to another, including the host interface.
这可以用于在虚拟网络级别全局设置防火墙配置,而无需为虚拟网络内的每个虚拟机单独设置防火墙规则。它只能包含 FORWARD 方向的规则,因为不存在入站或出站流量的概念。这会影响从一个桥接端口到另一个桥接端口的所有流量,包括主机接口。
This feature is currently only available for the new nftables-based proxmox-firewall.
此功能目前仅适用于基于新 nftables 的 proxmox-firewall。
Since traffic passing the FORWARD chain is bi-directional, you need to create
rules for both directions if you want traffic to pass both ways. For instance if
HTTP traffic for a specific host should be allowed, you would need to create the
following rules:
由于通过 FORWARD 链的流量是双向的,如果您希望流量双向通过,则需要为两个方向创建规则。例如,如果应允许特定主机的 HTTP 流量,则需要创建以下规则:
FORWARD ACCEPT -dest 10.0.0.1 -dport 80
FORWARD ACCEPT -source 10.0.0.1 -sport 80
- [OPTIONS] [选项]
-
This is used to set VNet related firewall options.
此项用于设置与虚拟网络相关的防火墙选项。 -
enable: <boolean> (default = 0)
enable: <boolean>(默认 = 0) -
Enable/disable firewall rules.
启用/禁用防火墙规则。 -
log_level_forward: <alert | crit | debug | emerg | err | info | nolog | notice | warning>
log_level_forward: <alert | crit | 调试 | emerg | err | info | nolog | notice | warning> -
Log level for forwarded traffic.
转发流量的日志级别。 -
policy_forward: <ACCEPT | DROP>
policy_forward: <接受 | 丢弃> -
Forward policy. 转发策略。
- [RULES] [规则]
-
This section contains VNet specific firewall rules.
本节包含特定于虚拟网络的防火墙规则。
13.3. Firewall Rules 13.3. 防火墙规则
Firewall rules consist of a direction (IN, OUT or FORWARD) and an
action (ACCEPT, DENY, REJECT). You can also specify a macro
name. Macros contain predefined sets of rules and options. Rules can be
disabled by prefixing them with |.
防火墙规则由方向(IN、OUT 或 FORWARD)和动作(ACCEPT、DENY、REJECT)组成。您还可以指定宏名称。宏包含预定义的规则和选项集合。通过在规则前加上 | 可以禁用该规则。
[RULES]

DIRECTION ACTION [OPTIONS]
|DIRECTION ACTION [OPTIONS] # disabled rule

DIRECTION MACRO(ACTION) [OPTIONS] # use predefined macro
The following options can be used to refine rule matches.
以下选项可用于细化规则匹配。
- --dest <string>
-
Restrict packet destination address. This can refer to a single IP address, an IP set (+ipsetname) or an IP alias definition. You can also specify an address range like 20.34.101.207-201.3.9.99, or a list of IP addresses and networks (entries are separated by comma). Please do not mix IPv4 and IPv6 addresses inside such lists.
限制数据包的目标地址。可以是单个 IP 地址、IP 集合(+ipsetname)或 IP 别名定义。您也可以指定地址范围,如 20.34.101.207-201.3.9.99,或者 IP 地址和网络的列表(条目用逗号分隔)。请勿在此类列表中混合使用 IPv4 和 IPv6 地址。 - --dport <string>
-
Restrict TCP/UDP destination port. You can use service names or simple numbers (0-65535), as defined in /etc/services. Port ranges can be specified with \d+:\d+, for example 80:85, and you can use comma separated list to match several ports or ranges.
限制 TCP/UDP 目标端口。您可以使用服务名称或简单数字(0-65535),如/etc/services 中定义的。端口范围可以用\d+:\d+表示,例如 80:85,且可以使用逗号分隔的列表来匹配多个端口或范围。 - --icmp-type <string>
-
Specify icmp-type. Only valid if proto equals icmp or icmpv6/ipv6-icmp.
指定 icmp 类型。仅当 proto 等于 icmp 或 icmpv6/ipv6-icmp 时有效。 - --iface <string>
-
Network interface name. You have to use network configuration key names for VMs and containers (net\d+). Host related rules can use arbitrary strings.
网络接口名称。对于虚拟机和容器,必须使用网络配置键名(net\d+)。主机相关规则可以使用任意字符串。 -
--log <alert | crit | debug | emerg | err | info | nolog | notice | warning>
--log <alert | crit | 调试 | emerg | err | info | nolog | notice | warning> -
Log level for firewall rule.
防火墙规则的日志级别。 - --proto <string>
-
IP protocol. You can use protocol names (tcp/udp) or simple numbers, as defined in /etc/protocols.
IP 协议。您可以使用协议名称(tcp/udp)或简单数字,定义见 /etc/protocols。 - --source <string>
-
Restrict packet source address. This can refer to a single IP address, an IP set (+ipsetname) or an IP alias definition. You can also specify an address range like 20.34.101.207-201.3.9.99, or a list of IP addresses and networks (entries are separated by comma). Please do not mix IPv4 and IPv6 addresses inside such lists.
限制数据包源地址。可以是单个 IP 地址、IP 集合(+ipsetname)或 IP 别名定义。您也可以指定地址范围,如 20.34.101.207-201.3.9.99,或 IP 地址和网络的列表(条目用逗号分隔)。请勿在此类列表中混合使用 IPv4 和 IPv6 地址。 - --sport <string>
-
Restrict TCP/UDP source port. You can use service names or simple numbers (0-65535), as defined in /etc/services. Port ranges can be specified with \d+:\d+, for example 80:85, and you can use comma separated list to match several ports or ranges.
限制 TCP/UDP 源端口。您可以使用服务名称或简单数字(0-65535),如/etc/services 中定义。端口范围可用\d+:\d+格式指定,例如 80:85,且可以使用逗号分隔的列表匹配多个端口或范围。
Here are some examples: 以下是一些示例:
[RULES]

IN SSH(ACCEPT) -i net0
IN SSH(ACCEPT) -i net0 # a comment
IN SSH(ACCEPT) -i net0 -source 192.168.2.192 # only allow SSH from 192.168.2.192
IN SSH(ACCEPT) -i net0 -source 10.0.0.1-10.0.0.10 # accept SSH for IP range
IN SSH(ACCEPT) -i net0 -source 10.0.0.1,10.0.0.2,10.0.0.3 # accept ssh for IP list
IN SSH(ACCEPT) -i net0 -source +mynetgroup # accept ssh for ipset mynetgroup
IN SSH(ACCEPT) -i net0 -source myserveralias # accept ssh for alias myserveralias

|IN SSH(ACCEPT) -i net0 # disabled rule

IN DROP # drop all incoming packages
OUT ACCEPT # accept all outgoing packages
13.4. Security Groups 13.4. 安全组
A security group is a collection of rules, defined at cluster level, which
can be used in all VMs' rules. For example you can define a group named
“webserver” with rules to open the http and https ports.
安全组是在集群级别定义的一组规则,可以在所有虚拟机的规则中使用。例如,你可以定义一个名为“webserver”的组,包含打开 http 和 https 端口的规则。
# /etc/pve/firewall/cluster.fw

[group webserver]

IN ACCEPT -p tcp -dport 80
IN ACCEPT -p tcp -dport 443
Then, you can add this group to a VM’s firewall
然后,你可以将该组添加到虚拟机的防火墙中。
# /etc/pve/firewall/<VMID>.fw

[RULES]

GROUP webserver
13.5. IP Aliases 13.5. IP 别名
IP Aliases allow you to associate IP addresses of networks with a
name. You can then refer to those names:
IP 别名允许您将网络的 IP 地址与一个名称关联。然后,您可以引用这些名称:
-
inside IP set definitions
在 IP 集合定义中 -
in source and dest properties of firewall rules
在防火墙规则的源和目标属性中
13.5.1. Standard IP Alias local_network
13.5.1. 标准 IP 别名 local_network
This alias is automatically defined. Please use the following command
to see assigned values:
此别名是自动定义的。请使用以下命令查看分配的值:
# pve-firewall localnet
local hostname: example
local IP address: 192.168.2.100
network auto detect: 192.168.0.0/20
using detected local_network: 192.168.0.0/20
The firewall automatically sets up rules to allow everything needed
for cluster communication (corosync, API, SSH) using this alias.
防火墙会自动设置规则,允许使用此别名进行集群通信所需的一切(corosync、API、SSH)。
The user can overwrite these values in the cluster.fw alias
section. If you use a single host on a public network, it is better to
explicitly assign the local IP address
用户可以在 cluster.fw 别名部分覆盖这些值。如果您在公共网络上使用单个主机,最好显式分配本地 IP 地址。
# /etc/pve/firewall/cluster.fw

[ALIASES]

local_network 1.2.3.4 # use the single IP address
13.6. IP Sets 13.6. IP 集合
IP sets can be used to define groups of networks and hosts. You can
refer to them with +name in the firewall rules’ source and dest
properties.
IP 集合可用于定义网络和主机组。您可以在防火墙规则的源和目标属性中使用‘+name’来引用它们。
The following example allows HTTP traffic from the management IP
set.
以下示例允许来自管理 IP 集合的 HTTP 流量。
IN HTTP(ACCEPT) -source +management
13.6.1. Standard IP set management
13.6.1. 标准 IP 集合管理
This IP set applies only to host firewalls (not VM firewalls). Those
IPs are allowed to do normal management tasks (Proxmox VE GUI, VNC, SPICE,
SSH).
此 IP 集合仅适用于主机防火墙(不适用于虚拟机防火墙)。这些 IP 被允许执行正常的管理任务(Proxmox VE 图形界面、VNC、SPICE、SSH)。
The local cluster network is automatically added to this IP set (alias
cluster_network), to enable inter-host cluster
communication. (multicast,ssh,…)
本地集群网络会自动添加到此 IP 集合(别名 cluster_network),以启用主机间的集群通信。(多播、SSH 等)
# /etc/pve/firewall/cluster.fw

[IPSET management]

192.168.2.10
192.168.2.10/24
13.6.2. Standard IP set blacklist
13.6.2. 标准 IP 集合黑名单
Traffic from these IPs is dropped by every host’s and VM’s firewall.
来自这些 IP 的流量会被每个主机和虚拟机的防火墙丢弃。
# /etc/pve/firewall/cluster.fw

[IPSET blacklist]

77.240.159.182
213.87.123.0/24
13.6.3. Standard IP set ipfilter-net*
13.6.3. 标准 IP 集合 ipfilter-net*
These filters belong to a VM’s network interface and are mainly used to prevent
IP spoofing. If such a set exists for an interface then any outgoing traffic
with a source IP not matching its interface’s corresponding ipfilter set will
be dropped.
这些过滤器属于虚拟机的网络接口,主要用于防止 IP 欺骗。如果某个接口存在这样的集合,那么任何源 IP 不匹配该接口对应 ipfilter 集合的出站流量都会被丢弃。
For containers with configured IP addresses these sets, if they exist (or are
activated via the general IP Filter option in the VM’s firewall’s options
tab), implicitly contain the associated IP addresses.
对于配置了 IP 地址的容器,这些集合如果存在(或通过虚拟机防火墙选项卡中的通用 IP 过滤器选项激活),会隐式包含相关的 IP 地址。
For both virtual machines and containers they also implicitly contain the
standard MAC-derived IPv6 link-local address in order to allow the neighbor
discovery protocol to work.
对于虚拟机和容器,这些集合还隐式包含标准的基于 MAC 地址派生的 IPv6 链路本地地址,以允许邻居发现协议正常工作。
/etc/pve/firewall/<VMID>.fw

[IPSET ipfilter-net0] # only allow specified IPs on net0

192.168.2.10
13.7. Services and Commands
13.7. 服务和命令
The firewall runs two service daemons on each node:
防火墙在每个节点上运行两个服务守护进程:
-
pvefw-logger: NFLOG daemon (ulogd replacement).
pvefw-logger:NFLOG 守护进程(ulogd 替代品)。 -
pve-firewall: updates iptables rules
pve-firewall:更新 iptables 规则
There is also a CLI command named pve-firewall, which can be used to
start and stop the firewall service:
还有一个名为 pve-firewall 的命令行工具,可以用来启动和停止防火墙服务:
# pve-firewall start
# pve-firewall stop
To get the status use:
要查看状态,请使用:
# pve-firewall status
The above command reads and compiles all firewall rules, so you will
see warnings if your firewall configuration contains any errors.
上述命令会读取并编译所有防火墙规则,因此如果防火墙配置中存在任何错误,你会看到警告。
If you want to see the generated iptables rules you can use:
如果你想查看生成的 iptables 规则,可以使用:
# iptables-save
13.8. Default firewall rules
13.8. 默认防火墙规则
The following traffic is filtered by the default firewall configuration:
以下流量会被默认防火墙配置过滤:
13.8.1. Datacenter incoming/outgoing DROP/REJECT
13.8.1. 数据中心进出站 DROP/REJECT
If the input or output policy for the firewall is set to DROP or REJECT, the
following traffic is still allowed for all Proxmox VE hosts in the cluster:
如果防火墙的输入或输出策略设置为 DROP 或 REJECT,集群中所有 Proxmox VE 主机仍允许以下流量:
-
traffic over the loopback interface
环回接口上的流量 -
already established connections
已建立的连接 -
traffic using the IGMP protocol
使用 IGMP 协议的流量 -
TCP traffic from management hosts to port 8006 in order to allow access to the web interface
来自管理主机到端口 8006 的 TCP 流量,以允许访问网页界面 -
TCP traffic from management hosts to the port range 5900 to 5999 allowing traffic for the VNC web console
来自管理主机到端口范围 5900 到 5999 的 TCP 流量,允许 VNC 网页控制台的流量 -
TCP traffic from management hosts to port 3128 for connections to the SPICE proxy
来自管理主机到端口 3128 的 TCP 流量,用于连接 SPICE 代理 -
TCP traffic from management hosts to port 22 to allow ssh access
来自管理主机到端口 22 的 TCP 流量,允许 SSH 访问 -
UDP traffic in the cluster network to ports 5405-5412 for corosync
集群网络中到端口 5405-5412 的 UDP 流量,用于 corosync -
UDP multicast traffic in the cluster network
集群网络中的 UDP 组播流量 -
ICMP traffic type 3 (Destination Unreachable), 4 (congestion control) or 11 (Time Exceeded)
ICMP 流量类型 3(目标不可达)、4(拥塞控制)或 11(超时)
The following traffic is dropped, but not logged even with logging enabled:
以下流量会被丢弃,但即使启用日志记录也不会记录:
-
TCP connections with invalid connection state
具有无效连接状态的 TCP 连接 -
Broadcast, multicast and anycast traffic not related to corosync, i.e., not coming through ports 5405-5412
与 corosync 无关的广播、多播和任播流量,即不通过 5405-5412 端口的流量 -
TCP traffic to port 43
到端口 43 的 TCP 流量 -
UDP traffic to ports 135 and 445
到端口 135 和 445 的 UDP 流量 -
UDP traffic to the port range 137 to 139
到端口范围 137 至 139 的 UDP 流量 -
UDP traffic from source port 137 to the port range 1024 to 65535
UDP 流量,源端口为 137,目标端口范围为 1024 到 65535 -
UDP traffic to port 1900
UDP 流量,目标端口为 1900 -
TCP traffic to port 135, 139 and 445
TCP 流量,目标端口为 135、139 和 445 -
UDP traffic originating from source port 53
UDP 流量,源端口为 53
The rest of the traffic is dropped or rejected, respectively, and also logged.
This may vary depending on the additional options enabled in
Firewall → Options, such as NDP, SMURFS and TCP flag filtering.
其余的流量将被丢弃或拒绝,并且会被记录。这可能会根据在防火墙 → 选项中启用的附加选项(如 NDP、SMURFS 和 TCP 标志过滤)而有所不同。
Please inspect the output of the
请检查
# iptables-save
system command to see the firewall chains and rules active on your system.
This output is also included in a System Report, accessible over a node’s
subscription tab in the web GUI, or through the pvereport command-line tool.
系统命令的输出,以查看系统上活动的防火墙链和规则。该输出也包含在系统报告中,可以通过节点的订阅标签在网页 GUI 中访问,或通过 pvereport 命令行工具查看。
13.8.2. VM/CT incoming/outgoing DROP/REJECT
13.8.2. 虚拟机/容器入站/出站丢弃/拒绝
This drops or rejects all the traffic to the VMs, with some exceptions for
DHCP, NDP, Router Advertisement, MAC and IP filtering depending on the set
configuration. The same rules for dropping/rejecting packets are inherited
from the datacenter, while the exceptions for accepted incoming/outgoing
traffic of the host do not apply.
这会丢弃或拒绝所有到虚拟机的流量,但根据设置的配置,对 DHCP、NDP、路由器通告、MAC 和 IP 过滤有一些例外。丢弃/拒绝数据包的相同规则继承自数据中心,而主机接受的进出流量的例外则不适用。
Again, you can use iptables-save (see above)
to inspect all rules and chains applied.
同样,你可以使用 iptables-save(见上文)来检查所有应用的规则和链。
13.9. Logging of firewall rules
13.9. 防火墙规则的日志记录
By default, all logging of traffic filtered by the firewall rules is disabled.
To enable logging, the loglevel for incoming and/or outgoing traffic has to be
set in Firewall → Options. This can be done for the host as well as for the
VM/CT firewall individually. By this, logging of Proxmox VE’s standard firewall rules
is enabled and the output can be observed in Firewall → Log.
Further, only some dropped or rejected packets are logged for the standard rules
(see default firewall rules).
默认情况下,防火墙规则过滤的所有流量日志记录均被禁用。要启用日志记录,必须在防火墙 → 选项中设置入站和/或出站流量的日志级别。这可以针对主机以及虚拟机/容器的防火墙分别设置。通过此操作,Proxmox VE 标准防火墙规则的日志记录被启用,输出可以在防火墙 → 日志中查看。此外,标准规则仅记录部分被丢弃或拒绝的数据包(参见默认防火墙规则)。
loglevel does not affect how much of the filtered traffic is logged. It
changes a LOGID appended as prefix to the log output for easier filtering and
post-processing.
loglevel 不影响记录的过滤流量的多少。它改变了附加在日志输出前缀的 LOGID,以便更容易进行过滤和后期处理。
loglevel is one of the following flags:
loglevel 是以下标志之一:
| loglevel | LOGID |
|---|---|
| nolog | — |
| emerg 紧急 | 0 |
| alert 警报 | 1 |
| crit 严重 | 2 |
| err 错误 | 3 |
| warning 警告 | 4 |
| notice 通知 | 5 |
| info 信息 | 6 |
| debug 调试 | 7 |
A typical firewall log output looks like this:
典型的防火墙日志输出如下所示:
VMID LOGID CHAIN TIMESTAMP POLICY: PACKET_DETAILS
In case of the host firewall, VMID is equal to 0.
对于主机防火墙,VMID 等于 0。
13.9.1. Logging of user defined firewall rules
13.9.1. 用户定义防火墙规则的日志记录
In order to log packets filtered by user-defined firewall rules, it is possible
to set a log-level parameter for each rule individually.
This allows to log in a fine grained manner and independent of the log-level
defined for the standard rules in Firewall → Options.
为了记录被用户定义防火墙规则过滤的数据包,可以为每条规则单独设置日志级别参数。这允许以细粒度的方式记录日志,并且独立于在防火墙 → 选项中为标准规则定义的日志级别。
While the loglevel for each individual rule can be defined or changed easily
in the web UI during creation or modification of the rule, it is possible to set
this also via the corresponding pvesh API calls.
虽然每条规则的日志级别可以在创建或修改规则时通过网页界面轻松定义或更改,但也可以通过相应的 pvesh API 调用来设置。
Further, the log-level can also be set via the firewall configuration file by
appending a -log <loglevel> to the selected rule (see
possible log-levels).
此外,日志级别还可以通过防火墙配置文件设置,只需在选定的规则后附加 -log <loglevel>(参见可能的日志级别)。
For example, the following two are identical:
例如,以下两者是相同的:
IN REJECT -p icmp -log nolog
IN REJECT -p icmp
whereas 而
IN REJECT -p icmp -log debug
produces a log output flagged with the debug level.
产生一个带有调试级别标记的日志输出。
13.10. Tips and Tricks
13.10. 提示与技巧
13.10.1. How to allow FTP
13.10.1. 如何允许 FTP
FTP is an old style protocol which uses port 21 and several other dynamic ports. So you
need a rule to accept port 21. In addition, you need to load the ip_conntrack_ftp module.
So please run:
FTP 是一种使用端口 21 及多个其他动态端口的旧式协议。因此,您需要一条规则来接受端口 21。此外,您还需要加载 ip_conntrack_ftp 模块。请执行:
modprobe ip_conntrack_ftp
and add ip_conntrack_ftp to /etc/modules (so that it works after a reboot).
并将 ip_conntrack_ftp 添加到 /etc/modules(以便重启后仍然生效)。
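A hedged sketch tying the pieces together (the firewall rule uses the predefined FTP macro for a guest firewall and assumes its first interface, net0):
# load the conntrack helper now and on every boot
modprobe ip_conntrack_ftp
echo ip_conntrack_ftp >> /etc/modules

# /etc/pve/firewall/<VMID>.fw
[RULES]
IN FTP(ACCEPT) -i net0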
13.10.2. Suricata IPS integration
13.10.2. Suricata IPS 集成
If you want to use the Suricata IPS
(Intrusion Prevention System), it’s possible.
如果您想使用 Suricata IPS(入侵防御系统),这是可行的。
Packets will be forwarded to the IPS only after the firewall ACCEPTed
them.
数据包只有在防火墙接受(ACCEPT)后才会被转发到 IPS。
Rejected/Dropped firewall packets don’t go to the IPS.
被拒绝/丢弃的防火墙数据包不会发送到 IPS。
Install Suricata on the Proxmox VE host:
在 proxmox 主机上安装 suricata:
# apt-get install suricata
# modprobe nfnetlink_queue
Don’t forget to add nfnetlink_queue to /etc/modules for next reboot.
别忘了将 nfnetlink_queue 添加到/etc/modules,以便下次重启生效。
Then, enable IPS for a specific VM with:
然后,使用以下命令为特定虚拟机启用 IPS:
# /etc/pve/firewall/<VMID>.fw

[OPTIONS]
ips: 1
ips_queues: 0
ips_queues will bind a specific cpu queue for this VM.
ips_queues 将为此虚拟机绑定一个特定的 CPU 队列。
Available queues are defined in
可用的队列定义在
# /etc/default/suricata
NFQUEUE=0
13.11. Notes on IPv6
13.11. IPv6 说明
The firewall contains a few IPv6 specific options. One thing to note is that
IPv6 does not use the ARP protocol anymore, and instead uses NDP (Neighbor
Discovery Protocol) which works on IP level and thus needs IP addresses to
succeed. For this purpose link-local addresses derived from the interface’s MAC
address are used. By default the NDP option is enabled on both host and VM
level to allow neighbor discovery (NDP) packets to be sent and received.
防火墙包含一些特定于 IPv6 的选项。需要注意的是,IPv6 不再使用 ARP 协议,而是使用 NDP(邻居发现协议),该协议在 IP 层工作,因此需要 IP 地址才能成功。为此,使用了从接口的 MAC 地址派生的链路本地地址。默认情况下,主机和虚拟机级别都启用了 NDP 选项,以允许发送和接收邻居发现(NDP)数据包。
Beside neighbor discovery NDP is also used for a couple of other things, like
auto-configuration and advertising routers.
除了邻居发现,NDP 还用于其他一些功能,比如自动配置和路由器通告。
By default VMs are allowed to send out router solicitation messages (to query
for a router), and to receive router advertisement packets. This allows them to
use stateless auto configuration. On the other hand VMs cannot advertise
themselves as routers unless the “Allow Router Advertisement” (radv: 1) option
is set.
默认情况下,虚拟机允许发送路由器请求消息(用于查询路由器),并接收路由器通告包。这使它们能够使用无状态自动配置。另一方面,除非设置了“允许路由器通告”(radv: 1)选项,否则虚拟机不能将自己通告为路由器。
As for the link local addresses required for NDP, there’s also an “IP Filter”
(ipfilter: 1) option which can be enabled which has the same effect as adding
an ipfilter-net* ipset for each of the VM’s network interfaces containing the
corresponding link local addresses. (See the
Standard IP set ipfilter-net* section for details.)
至于 NDP 所需的链路本地地址,还有一个“IP 过滤器”(ipfilter: 1)选项可以启用,其效果相当于为每个虚拟机的网络接口添加一个包含相应链路本地地址的 ipfilter-net* ipset。(详情见标准 IP 集合 ipfilter-net* 部分。)
13.12. Ports used by Proxmox VE
13.12. Proxmox VE 使用的端口
-
Web interface: 8006 (TCP, HTTP/1.1 over TLS)
网页界面:8006(TCP,基于 TLS 的 HTTP/1.1) -
VNC Web console: 5900-5999 (TCP, WebSocket)
VNC 网页控制台:5900-5999(TCP,WebSocket) -
SPICE proxy: 3128 (TCP)
SPICE 代理:3128(TCP) -
sshd (used for cluster actions): 22 (TCP)
sshd(用于集群操作):22(TCP) -
rpcbind: 111 (UDP) rpcbind:111(UDP)
-
sendmail: 25 (TCP, outgoing)
sendmail:25(TCP,出站) -
corosync cluster traffic: 5405-5412 UDP
corosync 集群流量:5405-5412 UDP -
live migration (VM memory and local-disk data): 60000-60050 (TCP)
实时迁移(虚拟机内存和本地磁盘数据):60000-60050(TCP)
13.13. nftables
As an alternative to pve-firewall we offer proxmox-firewall, which is an
implementation of the Proxmox VE firewall based on the newer
nftables
rather than iptables.
作为 pve-firewall 的替代方案,我们提供了 proxmox-firewall,它是基于较新的 nftables 而非 iptables 实现的 Proxmox VE 防火墙。
proxmox-firewall is currently in tech preview. There might be bugs or incompatibilities with the original firewall. It is currently not suited for production use.
proxmox-firewall 目前处于技术预览阶段。可能存在与原防火墙的错误或不兼容情况。目前不适合生产环境使用。
This implementation uses the same configuration files and configuration format,
so you can use your old configuration when switching. It provides the exact same
functionality with a few exceptions:
该实现使用相同的配置文件和配置格式,因此切换时可以使用旧的配置。它提供了完全相同的功能,但有一些例外:
-
REJECT is currently not possible for guest traffic (traffic will instead be dropped).
REJECT 目前不适用于来宾流量(流量将被丢弃)。 -
Using the NDP, Router Advertisement or DHCP options will always create firewall rules, regardless of your default policy.
使用 NDP、路由器通告或 DHCP 选项总会创建防火墙规则,无论您的默认策略如何。 -
firewall rules for guests are evaluated even for connections that have conntrack table entries.
即使连接已有 conntrack 表条目,来宾的防火墙规则仍会被评估。
13.13.1. Installation and Usage
13.13.1. 安装与使用
Install the proxmox-firewall package:
安装 proxmox-firewall 包:
apt install proxmox-firewall
Enable the nftables backend via the Web UI on your hosts (Host > Firewall >
Options > nftables), or by enabling it in the configuration file for your hosts
(/etc/pve/nodes/<node_name>/host.fw):
通过 Web UI 在主机上启用 nftables 后端(主机 > 防火墙 > 选项 > nftables),或者在主机的配置文件中启用它(/etc/pve/nodes/<node_name>/host.fw):
[OPTIONS]
nftables: 1
After enabling/disabling proxmox-firewall, all running VMs and containers need to be restarted for the old/new firewall to work properly.
启用或禁用 proxmox-firewall 后,所有正在运行的虚拟机和容器都需要重启,以使旧的或新的防火墙正常工作。
After setting the nftables configuration key, the new proxmox-firewall
service will take over. You can check if the new service is working by
checking the systemctl status of proxmox-firewall:
设置 nftables 配置键后,新的 proxmox-firewall 服务将接管。您可以通过检查 proxmox-firewall 的 systemctl 状态来确认新服务是否正常运行:
systemctl status proxmox-firewall
You can also examine the generated ruleset. You can find more information about
this in the section Helpful Commands.
You should also check that pve-firewall is no longer generating iptables
rules; the respective commands can be found in the
Services and Commands section.
您还可以检查生成的规则集。有关详细信息,请参阅“有用命令”一节。您还应检查 pve-firewall 是否不再生成 iptables 规则,相关命令可在“服务和命令”一节中找到。
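A hedged way to perform both checks from the shell (the grep pattern assumes the stock firewall's PVEFW chain prefix):
nft list tables | grep proxmox-firewall   # the generated tables should be listed
iptables-save | grep PVEFW                # should eventually return nothing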
Switching back to the old firewall can be done by simply setting the
configuration value back to 0 / No.
切换回旧防火墙只需将配置值设置回 0 / 否即可。
13.13.2. Usage 13.13.2. 使用方法
proxmox-firewall will create two tables that are managed by the
proxmox-firewall service: proxmox-firewall and proxmox-firewall-guests. If
you want to create custom rules that live outside the Proxmox VE firewall
configuration you can create your own tables to manage your custom firewall
rules. proxmox-firewall will only touch the tables it generates, so you can
easily extend and modify the behavior of the proxmox-firewall by adding your
own tables.
proxmox-firewall 会创建两个由 proxmox-firewall 服务管理的表:proxmox-firewall 和 proxmox-firewall-guests。如果您想创建位于 Proxmox VE 防火墙配置之外的自定义规则,可以创建自己的表来管理自定义防火墙规则。proxmox-firewall 只会操作它生成的表,因此您可以通过添加自己的表轻松扩展和修改 proxmox-firewall 的行为。
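As a minimal, hypothetical sketch of such a custom table (the table, chain and rule are placeholders and not part of the Proxmox VE configuration):
nft add table inet custom_rules
nft 'add chain inet custom_rules input { type filter hook input priority 10; policy accept; }'
nft add rule inet custom_rules input tcp dport 2222 accept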
Instead of using the pve-firewall command, the nftables-based firewall uses
proxmox-firewall. It is a systemd service, so you can start and stop it via
systemctl:
防火墙不再使用 pve-firewall 命令,而是使用基于 nftables 的 proxmox-firewall。它是一个 systemd 服务,因此你可以通过 systemctl 启动和停止它:
systemctl start proxmox-firewall
systemctl stop proxmox-firewall
Stopping the firewall service will remove all generated rules.
停止防火墙服务将移除所有生成的规则。
To query the status of the firewall, you can query the status of the systemctl
service:
要查询防火墙的状态,可以查询 systemctl 服务的状态:
systemctl status proxmox-firewall
13.13.3. Helpful Commands
13.13.3. 有用的命令
You can check the generated ruleset via the following command:
您可以通过以下命令检查生成的规则集:
nft list ruleset
If you want to debug proxmox-firewall you can simply run the daemon in
foreground with the RUST_LOG environment variable set to trace. This should
provide you with detailed debugging output:
如果您想调试 proxmox-firewall,可以简单地在前台运行守护进程,并将 RUST_LOG 环境变量设置为 trace。这样可以为您提供详细的调试输出:
RUST_LOG=trace /usr/libexec/proxmox/proxmox-firewall
You can also edit the systemctl service if you want to have detailed output for
your firewall daemon:
如果您想为防火墙守护进程获得详细输出,也可以编辑 systemctl 服务:
systemctl edit proxmox-firewall
Then you need to add the override for the RUST_LOG environment variable:
然后,您需要为 RUST_LOG 环境变量添加覆盖设置:
[Service]
Environment="RUST_LOG=trace"
This will generate a large amount of logs very quickly, so only use this for
debugging purposes. Other, less verbose, log levels are info and debug.
这将非常快速地生成大量日志,因此仅用于调试目的。其他不那么详细的日志级别有 info 和调试。
Running in foreground writes the log output to STDERR, so you can redirect it
with the following command (e.g. for submitting logs to the community forum):
前台运行会将日志输出写入 STDERR,因此你可以使用以下命令重定向它(例如,用于向社区论坛提交日志):
RUST_LOG=trace /usr/libexec/proxmox/proxmox-firewall 2> firewall_log_$(hostname).txt
It can be helpful to trace packet flow through the different chains in order to
debug firewall rules. This can be achieved by setting nftrace to 1 for packets
that you want to track. It is advisable that you do not set this flag for all
packets, in the example below we only examine ICMP packets.
跟踪数据包通过不同链的流动有助于调试防火墙规则。通过将 nftrace 设置为 1,可以跟踪你想要追踪的数据包。建议不要对所有数据包设置此标志,下面的示例中我们只检查 ICMP 数据包。
#!/usr/sbin/nft -f

table bridge tracebridge
delete table bridge tracebridge

table bridge tracebridge {
    chain trace {
        meta l4proto icmp meta nftrace set 1
    }

    chain prerouting {
        type filter hook prerouting priority -350; policy accept;
        jump trace
    }

    chain postrouting {
        type filter hook postrouting priority -350; policy accept;
        jump trace
    }
}
Saving this file, making it executable, and then running it once will create the
respective tracing chains. You can then inspect the tracing output via the
Proxmox VE Web UI (Firewall > Log) or via nft monitor trace.
保存此文件,使其可执行,然后运行一次,将创建相应的跟踪链。你可以通过 Proxmox VE Web UI(防火墙 > 日志)或通过 nft monitor trace 检查跟踪输出。
The above example traces traffic on all bridges, which is usually where guest
traffic flows through. If you want to examine host traffic, create those chains
in the inet table instead of the bridge table.
上面的示例跟踪了所有桥接的流量,这通常是来宾流量流经的地方。如果你想检查主机流量,请在 inet 表中创建这些链,而不是在 bridge 表中。
Be aware that this can generate a lot of log spam and slow down the performance of your networking stack significantly.
请注意,这可能会产生大量日志垃圾,并显著降低你的网络堆栈性能。
You can remove the tracing rules via running the following command:
你可以通过运行以下命令来移除跟踪规则:
nft delete table bridge tracebridge
14. User Management
14. 用户管理
Proxmox VE supports multiple authentication sources, for example Linux PAM,
an integrated Proxmox VE authentication server, LDAP, Microsoft Active
Directory and OpenID Connect.
Proxmox VE 支持多种认证源,例如 Linux PAM、集成的 Proxmox VE 认证服务器、LDAP、Microsoft Active Directory 和 OpenID Connect。
By using role-based user and permission management for all objects (VMs,
Storage, nodes, etc.), granular access can be defined.
通过对所有对象(虚拟机、存储、节点等)使用基于角色的用户和权限管理,可以定义细粒度的访问权限。
14.1. Users 14.1. 用户
Proxmox VE stores user attributes in /etc/pve/user.cfg.
Passwords are not stored here; users are instead associated with the
authentication realms described below.
Therefore, a user is often internally identified by their username and
realm in the form <userid>@<realm>.
Proxmox VE 将用户属性存储在 /etc/pve/user.cfg 中。密码不存储在此处;用户与下面描述的认证域相关联。因此,用户通常通过其用户名和域以 <userid>@<realm> 的形式在内部识别。
Each user entry in this file contains the following information:
此文件中的每个用户条目包含以下信息:
-
First name 名字
-
Last name 姓氏
-
E-mail address 电子邮件地址
-
Group memberships 组成员资格
-
An optional expiration date
可选的过期日期 -
A comment or note about this user
关于此用户的评论或备注 -
Whether this user is enabled or disabled
此用户是启用还是禁用 -
Optional two-factor authentication keys
可选的双因素认证密钥
When you disable or delete a user, or if the expiry date set is in the past, this user will not be able to log in to new sessions or start new tasks. All tasks which have already been started by this user (for example, terminal sessions) will not be terminated automatically by any such event.
当您禁用或删除用户,或者设置的过期日期已过时,该用户将无法登录新会话或启动新任务。该用户已启动的所有任务(例如,终端会话)不会因上述任何事件而自动终止。
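For instance, user entries can be created and adjusted from the command line with pveum (a hedged sketch; the username and attributes are placeholders):
pveum user add jdoe@pve -comment "Jane Doe" -email jdoe@example.com
pveum passwd jdoe@pve                  # set a password for the Proxmox VE authentication realm
pveum user modify jdoe@pve -enable 0   # disable the user again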
14.1.1. System administrator
14.1.1. 系统管理员
The system’s root user can always log in via the Linux PAM realm and is an
unconfined administrator. This user cannot be deleted, but attributes can
still be changed. System mails will be sent to the email address
assigned to this user.
系统的 root 用户始终可以通过 Linux PAM 领域登录,并且是一个无限制的管理员。该用户无法被删除,但属性仍然可以更改。系统邮件将发送到分配给该用户的电子邮件地址。
14.2. Groups 14.2. 组
Each user can be a member of several groups. Groups are the preferred
way to organize access permissions. You should always grant permissions
to groups instead of individual users. That way you will get a
much more maintainable access control list.
每个用户可以是多个组的成员。组是组织访问权限的首选方式。您应始终将权限授予组,而不是单个用户。这样,您将获得一个更易维护的访问控制列表。
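A hedged example of this pattern with pveum (group and role names are illustrative):
pveum group add admins -comment "System Administrators"
pveum acl modify / -group admins -role Administrator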
14.3. API Tokens 14.3. API 代币
API tokens allow stateless access to most parts of the REST API from another
system, software or API client. Tokens can be generated for individual users
and can be given separate permissions and expiration dates to limit the scope
and duration of the access. Should the API token get compromised, it can be
revoked without disabling the user itself.
API 代币允许从另一个系统、软件或 API 客户端无状态地访问大部分 REST API。代币可以为单个用户生成,并可以赋予单独的权限和过期日期,以限制访问的范围和持续时间。如果 API 代币被泄露,可以撤销该代币,而无需禁用用户本身。
API tokens come in two basic types:
API 代币有两种基本类型:
-
Separated privileges: The token needs to be given explicit access with ACLs. Its effective permissions are calculated by intersecting user and token permissions.
分离权限:代币需要通过 ACL 明确授予访问权限。其有效权限是通过用户权限和代币权限的交集计算得出。 -
Full privileges: The token’s permissions are identical to that of the associated user.
完全权限:代币的权限与关联用户的权限完全相同。
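Tokens can be created in the web UI or on the command line, for example with pveum (a hedged sketch; the token ID is a placeholder, and -privsep 1 creates a token with separated privileges):
pveum user token add root@pam monitoring -privsep 1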
The token value is only displayed/returned once when the token is generated. It cannot be retrieved again over the API at a later time!
代币值仅在生成时显示/返回一次,之后无法通过 API 再次获取!
To use an API token, set the HTTP header Authorization to the displayed value
of the form PVEAPIToken=USER@REALM!TOKENID=UUID when making API requests, or
refer to your API client’s documentation.
要使用 API 代币,在发起 API 请求时,将 HTTP 头部 Authorization 设置为显示的值,格式为 PVEAPIToken=USER@REALM!TOKENID=UUID,或参考您的 API 客户端文档。
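For example, a hedged curl call against a hypothetical host (token ID and secret are placeholders):
curl -H 'Authorization: PVEAPIToken=root@pam!monitoring=aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee' \
    https://pve.example.com:8006/api2/json/nodes   # add -k for self-signed certificates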
14.4. Resource Pools 14.4. 资源池
A resource pool is a set of virtual machines, containers, and storage
devices. It is useful for permission handling in cases where certain users
should have controlled access to a specific set of resources, as it allows for a
single permission to be applied to a set of elements, rather than having to
manage this on a per-resource basis. Resource pools are often used in tandem
with groups, so that the members of a group have permissions on a set of
machines and storage.
资源池是一组虚拟机、容器和存储设备。在某些用户需要对特定资源集进行受控访问的情况下,它对于权限管理非常有用,因为它允许对一组元素应用单一权限,而无需逐个资源管理权限。资源池通常与组一起使用,使组成员对一组机器和存储拥有权限。
14.5. Authentication Realms
14.5. 认证域
As Proxmox VE users are just counterparts for users existing on some external
realm, the realms have to be configured in /etc/pve/domains.cfg.
The following realms (authentication methods) are available:
由于 Proxmox VE 用户只是某些外部域中已存在用户的对应体,因此必须在 /etc/pve/domains.cfg 中配置这些域。以下域(认证方法)可用:
-
Linux PAM Standard Authentication
Linux PAM 标准认证 -
Linux PAM is a framework for system-wide user authentication. These users are created on the host system with commands such as adduser. If PAM users exist on the Proxmox VE host system, corresponding entries can be added to Proxmox VE, to allow these users to log in via their system username and password.
Linux PAM 是一个用于系统范围用户认证的框架。这些用户是在主机系统上通过如 adduser 之类的命令创建的。如果 PAM 用户存在于 Proxmox VE 主机系统中,可以将相应条目添加到 Proxmox VE 中,以允许这些用户通过其系统用户名和密码登录。 -
Proxmox VE Authentication Server
Proxmox VE 认证服务器 -
This is a Unix-like password store, which stores hashed passwords in /etc/pve/priv/shadow.cfg. Passwords are hashed using the SHA-256 hashing algorithm. This is the most convenient realm for small-scale (or even mid-scale) installations, where users do not need access to anything outside of Proxmox VE. In this case, users are fully managed by Proxmox VE and are able to change their own passwords via the GUI.
这是一个类 Unix 的密码存储,密码以哈希形式存储在 /etc/pve/priv/shadow.cfg 中。密码使用 SHA-256 哈希算法进行哈希处理。这是小规模(甚至中等规模)安装中最方便的认证域,用户无需访问 Proxmox VE 之外的任何内容。在这种情况下,用户完全由 Proxmox VE 管理,并且能够通过图形界面自行更改密码。 - LDAP
-
LDAP (Lightweight Directory Access Protocol) is an open, cross-platform protocol for authentication using directory services. OpenLDAP is a popular open-source implementation of the LDAP protocol.
LDAP(轻量级目录访问协议)是一种开放的、跨平台的目录服务认证协议。OpenLDAP 是 LDAP 协议的一个流行开源实现。 - Microsoft Active Directory (AD)
-
Microsoft Active Directory (AD) is a directory service for Windows domain networks and is supported as an authentication realm for Proxmox VE. It supports LDAP as an authentication protocol.
Microsoft Active Directory (AD) 是用于 Windows 域网络的目录服务,并被支持作为 Proxmox VE 的认证领域。它支持 LDAP 作为认证协议。 - OpenID Connect
-
OpenID Connect is implemented as an identity layer on top of the OAuth 2.0 protocol. It allows clients to verify the identity of the user, based on authentication performed by an external authorization server.
OpenID Connect 实现为基于 OAuth 2.0 协议之上的身份层。它允许客户端基于外部授权服务器执行的认证来验证用户身份。
14.5.1. Linux PAM Standard Authentication
14.5.1. Linux PAM 标准认证
As Linux PAM corresponds to host system users, a system user must exist on each
node which the user is allowed to log in on. The user authenticates with their
usual system password. This realm is added by default and can’t be removed.
由于 Linux PAM 对应于主机系统用户,因此必须在用户被允许登录的每个节点上存在一个系统用户。用户使用其常用的系统密码进行认证。该领域默认添加,且无法删除。
Password changes via the GUI or, equivalently, the /access/password API
endpoint only apply to the local node and not cluster-wide. Even though Proxmox VE
has a multi-master design, using different passwords for different nodes can
still offer a security benefit.
通过 GUI 或等效的 /access/password API 端点进行的密码更改仅适用于本地节点,而不适用于整个集群。尽管 Proxmox VE 采用多主设计,但为不同节点使用不同密码仍然可以提供安全优势。
In terms of configurability, an administrator can choose to require two-factor
authentication with logins from the realm and to set the realm as the default
authentication realm.
在可配置性方面,管理员可以选择要求来自该领域的登录进行双因素认证,并将该领域设置为默认认证领域。
14.5.2. Proxmox VE Authentication Server
14.5.2. Proxmox VE 认证服务器
The Proxmox VE authentication server realm is a simple Unix-like password store.
The realm is created by default, and as with Linux PAM, the only configuration
items available are the ability to require two-factor authentication for users
of the realm, and to set it as the default realm for login.
Proxmox VE 认证服务器领域是一个简单的类 Unix 密码存储。该领域默认创建,与 Linux PAM 类似,唯一可用的配置项是能够为该领域的用户要求双因素认证,以及将其设置为登录的默认领域。
Unlike the other Proxmox VE realm types, users are created and authenticated entirely
through Proxmox VE, rather than authenticating against another system. Hence, you are
required to set a password for this type of user upon creation.
与其他 Proxmox VE 领域类型不同,用户完全通过 Proxmox VE 创建和认证,而不是通过其他系统进行认证。因此,创建此类用户时必须设置密码。
14.5.3. LDAP
You can also use an external LDAP server for user authentication (for example,
OpenLDAP). In this realm type, users are searched under a Base Domain Name
(base_dn), using the username attribute specified in the User Attribute Name
(user_attr) field.
您还可以使用外部 LDAP 服务器进行用户认证(例如,OpenLDAP)。在此领域类型中,用户在基础域名(base_dn)下搜索,使用用户属性名称(user_attr)字段中指定的用户名属性。
A server and optional fallback server can be configured, and the connection can
be encrypted via SSL. Furthermore, filters can be configured for directories and
groups. Filters allow you to further limit the scope of the realm.
可以配置一个服务器和一个可选的备用服务器,连接可以通过 SSL 加密。此外,还可以为目录和组配置过滤器。过滤器允许您进一步限制领域的范围。
For instance, if a user is represented via the following LDIF dataset:
例如,如果用户通过以下 LDIF 数据集表示:
# user1 of People at ldap-test.com dn: uid=user1,ou=People,dc=ldap-test,dc=com objectClass: top objectClass: person objectClass: organizationalPerson objectClass: inetOrgPerson uid: user1 cn: Test User 1 sn: Testers description: This is the first test user.
The Base Domain Name would be ou=People,dc=ldap-test,dc=com and the user
attribute would be uid.
基本域名将是 ou=People,dc=ldap-test,dc=com,用户属性将是 uid。
If Proxmox VE needs to authenticate (bind) to the LDAP server before being
able to query and authenticate users, a bind domain name can be
configured via the bind_dn property in /etc/pve/domains.cfg. Its
password then has to be stored in /etc/pve/priv/realm/<realmname>.pw
(for example, /etc/pve/priv/realm/my-ldap.pw). This file should contain a
single line with the raw password.
如果 Proxmox VE 需要在能够查询和认证用户之前先对 LDAP 服务器进行认证(绑定),则可以通过 /etc/pve/domains.cfg 中的 bind_dn 属性配置绑定域名。其密码随后必须存储在 /etc/pve/priv/realm/<realmname>.pw(例如,/etc/pve/priv/realm/my-ldap.pw)中。该文件应包含一行原始密码。
To verify certificates, you need to set capath. You can set it either
directly to the CA certificate of your LDAP server, or to the system path
containing all trusted CA certificates (/etc/ssl/certs).
Additionally, you need to set the verify option, which can also be done over
the web interface.
要验证证书,您需要设置 capath。您可以将其直接设置为 LDAP 服务器的 CA 证书,或者设置为包含所有受信任 CA 证书的系统路径(/etc/ssl/certs)。此外,您还需要设置 verify 选项,这也可以通过网页界面完成。
The main configuration options for an LDAP server realm are as follows:
LDAP 服务器域的主要配置选项如下:
-
Realm (realm): The realm identifier for Proxmox VE users
域(realm):Proxmox VE 用户的域标识符 -
Base Domain Name (base_dn): The directory which users are searched under
基础域名(base_dn):搜索用户的目录 -
User Attribute Name (user_attr): The LDAP attribute containing the username that users will log in with
用户属性名称(user_attr):包含用户登录用户名的 LDAP 属性 -
Server (server1): The server hosting the LDAP directory
服务器(server1):托管 LDAP 目录的服务器 -
Fallback Server (server2): An optional fallback server address, in case the primary server is unreachable
备用服务器(server2):可选的备用服务器地址,以防主服务器无法访问 -
Port (port): The port that the LDAP server listens on
端口(port):LDAP 服务器监听的端口
In order to allow a particular user to authenticate using the LDAP server, you must also add them as a user of that realm from the Proxmox VE server. This can be carried out automatically with syncing.
为了允许特定用户使用 LDAP 服务器进行认证,您还必须将他们作为该域的用户从 Proxmox VE 服务器中添加。这可以通过同步自动完成。
14.5.4. Microsoft Active Directory (AD)
To set up Microsoft AD as a realm, a server address and authentication domain
need to be specified. Active Directory supports most of the same properties as
LDAP, such as an optional fallback server, port, and SSL encryption.
Furthermore, users can be added to Proxmox VE automatically via
sync operations, after configuration.
要将 Microsoft AD 设置为一个域,需要指定服务器地址和认证域。Active Directory 支持与 LDAP 大部分相同的属性,例如可选的备用服务器、端口和 SSL 加密。此外,配置完成后,用户可以通过同步操作自动添加到 Proxmox VE 中。
As with LDAP, if Proxmox VE needs to authenticate before it binds to the AD server,
you must configure the Bind User (bind_dn) property. This property is
typically required by default for Microsoft AD.
与 LDAP 一样,如果 Proxmox VE 在绑定到 AD 服务器之前需要进行认证,则必须配置绑定用户(bind_dn)属性。对于 Microsoft AD,这个属性通常默认是必需的。
The main configuration settings for Microsoft Active Directory are:
Microsoft Active Directory 的主要配置设置包括:
-
Realm (realm): The realm identifier for Proxmox VE users
Realm(realm):Proxmox VE 用户的领域标识符 -
Domain (domain): The AD domain of the server
Domain(domain):服务器的 AD 域 -
Server (server1): The FQDN or IP address of the server
Server(server1):服务器的完全限定域名(FQDN)或 IP 地址 -
Fallback Server (server2): An optional fallback server address, in case the primary server is unreachable
备用服务器(server2):一个可选的备用服务器地址,以防主服务器无法访问 -
Port (port): The port that the Microsoft AD server listens on
端口(port):Microsoft AD 服务器监听的端口
Microsoft AD normally checks values like usernames without case sensitivity. To make Proxmox VE do the same, you can disable the default case-sensitive option by editing the realm in the web UI, or using the CLI (replace ID with the realm ID):
Microsoft AD 通常对用户名等值进行不区分大小写的检查。要让 Proxmox VE 也这样做,可以通过在网页界面编辑域,或使用命令行(将 ID 替换为域 ID)禁用默认的区分大小写选项:

pveum realm modify ID --case-sensitive 0
14.5.5. Syncing LDAP-Based Realms
14.5.5. 同步基于 LDAP 的域
It’s possible to automatically sync users and groups for LDAP-based realms (LDAP
& Microsoft Active Directory), rather than having to add them to Proxmox VE manually.
You can access the sync options from the Add/Edit window of the web interface’s
Authentication panel or via the pveum realm add/modify commands. You can
then carry out the sync operation from the Authentication panel of the GUI or
using the following command:
可以自动同步基于 LDAP 的域(LDAP 和 Microsoft Active Directory)的用户和组,而无需手动将它们添加到 Proxmox VE 中。您可以通过 Web 界面认证面板的添加/编辑窗口访问同步选项,或通过 pveum realm add/modify 命令进行操作。然后,您可以从 GUI 的认证面板执行同步操作,或使用以下命令:
pveum realm sync <realm>
Users and groups are synced to the cluster-wide configuration file,
/etc/pve/user.cfg.
用户和组会同步到集群范围的配置文件 /etc/pve/user.cfg 中。
Attributes to Properties
属性到属性
If the sync response includes user attributes, they will be synced into the
matching user property in the user.cfg. For example: firstname or
lastname.
如果同步响应包含用户属性,它们将同步到 user.cfg 中匹配的用户属性。例如:firstname 或 lastname。
If the names of the attributes are not matching the Proxmox VE properties, you can
set a custom field-to-field map in the config by using the sync_attributes
option.
如果属性名称与 Proxmox VE 属性不匹配,您可以通过使用 sync_attributes 选项在配置中设置自定义字段映射。
How such properties are handled if anything vanishes can be controlled via the
sync options, see below.
如果某些属性消失,如何处理这些属性可以通过同步选项进行控制,详见下文。
Sync Configuration 同步配置
The configuration options for syncing LDAP-based realms can be found in the
Sync Options tab of the Add/Edit window.
基于 LDAP 的领域同步配置选项可以在添加/编辑窗口的同步选项标签中找到。
The configuration options are as follows:
配置选项如下:
-
Bind User (bind_dn): Refers to the LDAP account used to query users and groups. This account needs access to all desired entries. If it’s set, the search will be carried out via binding; otherwise, the search will be carried out anonymously. The user must be a complete LDAP formatted distinguished name (DN), for example, cn=admin,dc=example,dc=com.
绑定用户(bind_dn):指用于查询用户和组的 LDAP 账户。该账户需要访问所有所需的条目。如果设置了该账户,搜索将通过绑定进行;否则,搜索将以匿名方式进行。用户必须是完整的 LDAP 格式的唯一标识名(DN),例如,cn=admin,dc=example,dc=com。 -
Groupname attr. (group_name_attr): Represents the users' groups. Only entries which adhere to the usual character limitations of the user.cfg are synced. Groups are synced with -$realm attached to the name, in order to avoid naming conflicts. Please ensure that a sync does not overwrite manually created groups.
组名属性(group_name_attr):表示用户所属的组。只有符合 user.cfg 常规字符限制的条目会被同步。组名同步时会附加 -$realm,以避免命名冲突。请确保同步不会覆盖手动创建的组。 -
User classes (user_classes): Objects classes associated with users.
用户类(user_classes):与用户关联的对象类。 -
Group classes (group_classes): Objects classes associated with groups.
组类(group_classes):与组相关联的对象类。 -
E-Mail attribute: If the LDAP-based server specifies user email addresses, these can also be included in the sync by setting the associated attribute here. From the command line, this is achievable through the --sync_attributes parameter.
电子邮件属性:如果基于 LDAP 的服务器指定了用户电子邮件地址,也可以通过在此处设置相关属性将其包含在同步中。从命令行来看,可以通过--sync_attributes 参数实现。 -
User Filter (filter): For further filter options to target specific users.
用户过滤器(filter):用于进一步筛选特定用户的选项。 -
Group Filter (group_filter): For further filter options to target specific groups.
组过滤器(group_filter):用于进一步筛选特定组的选项。
Filters allow you to create a set of additional match criteria, to narrow down the scope of a sync. Information on available LDAP filter types and their usage can be found at ldap.com.
过滤器允许您创建一组额外的匹配条件,以缩小同步的范围。有关可用的 LDAP 过滤器类型及其用法的信息,请参见 ldap.com。
Sync Options 同步选项
In addition to the options specified in the previous section, you can also
configure further options that describe the behavior of the sync operation.
除了上一节中指定的选项外,您还可以配置描述同步操作行为的其他选项。
These options are either set as parameters before the sync, or as defaults via
the realm option sync-defaults-options.
这些选项可以在同步之前作为参数设置,或者通过领域选项 sync-defaults-options 设置为默认值。
The main options for syncing are:
同步的主要选项有:
-
Scope (scope): The scope of what to sync. It can be either users, groups or both.
范围(scope):同步的范围。可以是用户、组或两者。 -
Enable new (enable-new): If set, the newly synced users are enabled and can log in. The default is true.
启用新用户(enable-new):如果设置,新同步的用户将被启用并可以登录。默认值为 true。 -
Remove Vanished (remove-vanished): This is a list of options which, when activated, determine if they are removed when they are not returned from the sync response. The options are:
移除消失的(remove-vanished):这是一组选项,当激活时,如果同步响应中未返回这些对象,则决定是否将其移除。选项包括:-
ACL (acl): Remove ACLs of users and groups which were not returned in the sync response. This most often makes sense together with Entry.
ACL(acl):移除同步响应中未返回的用户和组的 ACL。这通常与 Entry 选项一起使用最为合理。 -
Entry (entry): Removes entries (i.e. users and groups) when they are not returned in the sync response.
Entry(entry):当同步响应中未返回某些条目(即用户和组)时,移除这些条目。 -
Properties (properties): Removes properties of entries where the user in the sync response did not contain those attributes. This includes all properties, even those never set by a sync. Exceptions are tokens and the enable flag, these will be retained even with this option enabled.
Properties(properties):移除同步响应中用户未包含的属性。这包括所有属性,即使是从未通过同步设置过的属性。例外的是令牌和启用标志,即使启用此选项也会保留它们。
-
-
Preview (dry-run): No data is written to the config. This is useful if you want to see which users and groups would get synced to the user.cfg.
Preview(dry-run):不向配置写入任何数据。如果您想查看哪些用户和组将被同步到 user.cfg,这个选项非常有用。
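For example, a hedged dry run combining these options for a hypothetical realm my-ldap (nothing is written to user.cfg because of --dry-run):
pveum realm sync my-ldap --scope both --enable-new 1 \
    --remove-vanished "acl;entry" --dry-run 1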
Reserved characters 保留字符
Certain characters are reserved (see RFC2253) and cannot be
easily used in attribute values in DNs without being escaped properly.
某些字符是保留的(参见 RFC2253),在 DN 的属性值中如果不进行适当转义,无法轻易使用。
The following characters need escaping:
以下字符需要转义:
-
Space ( ) at the beginning or end
开头或结尾的空格( ) -
Number sign (#) at the beginning
开头的井号(#) -
Comma (,) 逗号(,)
-
Plus sign (+)
加号(+) -
Double quote (")
双引号(") -
Forward slashes (/)
正斜杠(/) -
Angle brackets (<>)
尖括号(<>) -
Semicolon (;) 分号(;)
-
Equals sign (=)
等号(=)
To use such characters in DNs, surround the attribute value in double quotes.
For example, to bind with a user with the CN (Common Name) Example, User, use
CN="Example, User",OU=people,DC=example,DC=com as value for bind_dn.
要在 DN 中使用此类字符,请将属性值用双引号括起来。例如,要绑定 CN(通用名)为 Example, User 的用户,请使用 CN="Example, User",OU=people,DC=example,DC=com 作为 bind_dn 的值。
This applies to the base_dn, bind_dn, and group_dn attributes.
这适用于 base_dn、bind_dn 和 group_dn 属性。
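A minimal sketch of passing such a quoted DN on the command line when creating an LDAP realm (server address, base DN and user attribute are placeholder assumptions):
pveum realm add my-ldap --type ldap --server1 ldap.example.com --base_dn "OU=people,DC=example,DC=com" --user_attr uid --bind_dn 'CN="Example, User",OU=people,DC=example,DC=com'
The single quotes keep the shell from stripping the inner double quotes around the attribute value.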
|
|
Users with colons and forward slashes cannot be synced since these are
reserved characters in usernames. 用户名中包含冒号和斜杠的用户无法同步,因为这些是用户名中的保留字符。 |
14.5.6. OpenID Connect
The main OpenID Connect configuration options are:
主要的 OpenID Connect 配置选项有:
-
Issuer URL (issuer-url): This is the URL of the authorization server. Proxmox VE uses the OpenID Connect Discovery protocol to automatically configure further details.
发行者 URL(issuer-url):这是授权服务器的 URL。Proxmox VE 使用 OpenID Connect 发现协议自动配置更多细节。While it is possible to use unencrypted http:// URLs, we strongly recommend to use encrypted https:// connections.
虽然可以使用未加密的 http:// URL,但我们强烈建议使用加密的 https:// 连接。 -
Realm (realm): The realm identifier for Proxmox VE users
领域(realm):Proxmox VE 用户的领域标识符 -
Client ID (client-id): OpenID Client ID.
客户端 ID(client-id):OpenID 客户端 ID。 -
Client Key (client-key): Optional OpenID Client Key.
客户端密钥(client-key):可选的 OpenID 客户端密钥。 -
Autocreate Users (autocreate): Automatically create users if they do not exist. While authentication is done at the OpenID server, all users still need an entry in the Proxmox VE user configuration. You can either add them manually, or use the autocreate option to automatically add new users.
自动创建用户(autocreate):如果用户不存在,则自动创建用户。虽然认证是在 OpenID 服务器上完成的,但所有用户仍需在 Proxmox VE 用户配置中有一个条目。您可以手动添加用户,或者使用自动创建选项自动添加新用户。 -
Username Claim (username-claim): OpenID claim used to generate the unique username (subject, username or email).
用户名声明(username-claim):用于生成唯一用户名的 OpenID 声明(subject、username 或 email)。 -
Autocreate Groups (groups-autocreate): Create all groups in the claim instead of using existing PVE groups (default behavior).
自动创建组(groups-autocreate):创建声明中的所有组,而不是使用现有的 PVE 组(默认行为)。 -
Groups Claim (groups-claim): OpenID claim used to retrieve the groups from the ID token or userinfo endpoint.
组声明(groups-claim):用于从 ID 令牌或 userinfo 端点检索组的 OpenID 声明。 -
Overwrite Groups (groups-overwrite): Overwrite all groups assigned to user instead of appending to existing groups (default behavior).
覆盖组(groups-overwrite):覆盖分配给用户的所有组,而不是追加到现有组(默认行为)。
Username mapping 用户名映射
The OpenID Connect specification defines a single unique attribute
(claim in OpenID terms) named subject. By default, we use the
value of this attribute to generate Proxmox VE usernames, by simply adding
@ and the realm name: ${subject}@${realm}.
OpenID Connect 规范定义了一个唯一的属性(在 OpenID 术语中称为声明)名为 subject。默认情况下,我们使用该属性的值来生成 Proxmox VE 用户名,方法是简单地添加 @ 和域名:${subject}@${realm}。
Unfortunately, most OpenID servers use random strings for subject, like
DGH76OKH34BNG3245SB, so a typical username would look like
DGH76OKH34BNG3245SB@yourrealm. While unique, such random strings are
difficult for humans to remember, making it practically impossible to
associate them with real users.
不幸的是,大多数 OpenID 服务器对 subject 使用随机字符串,如 DGH76OKH34BNG3245SB,因此典型的用户名看起来像 DGH76OKH34BNG3245SB@yourrealm。虽然唯一,但人类很难记住这样的随机字符串,因此几乎不可能将真实用户与之关联。
The username-claim setting allows you to use other attributes for
the username mapping. Setting it to username is preferred if the
OpenID Connect server provides that attribute and guarantees its
uniqueness.
username-claim 设置允许您使用其他属性进行用户名映射。如果 OpenID Connect 服务器提供该属性并保证其唯一性,建议将其设置为 username。
Another option is to use email, which also yields human readable
usernames. Again, only use this setting if the server guarantees the
uniqueness of this attribute.
另一种选择是使用 email,这也能生成易于阅读的用户名。同样,只有在服务器保证该属性唯一性的情况下才使用此设置。
Groups mapping 组映射
Specifying the groups-claim setting in the OpenID configuration enables group
mapping functionality. The data provided in the groups-claim should be
a list of strings that correspond to groups that a user should be a member of in
Proxmox VE. To prevent collisions, group names from the OpenID claim are suffixed
with -<realm name> (e.g. for the OpenID group name my-openid-group in the
realm oidc, the group name in Proxmox VE would be my-openid-group-oidc).
在 OpenID 配置中指定 groups-claim 设置可启用组映射功能。groups-claim 中提供的数据应为字符串列表,对应用户应在 Proxmox VE 中所属的组。为防止冲突,OpenID 声明中的组名会加上 -<领域名> 后缀(例如,对于领域 oidc 中的 OpenID 组名 my-openid-group,Proxmox VE 中的组名将是 my-openid-group-oidc)。
Any groups reported by the OpenID provider that do not exist in Proxmox VE are
ignored by default. If all groups reported by the OpenID provider should exist
in Proxmox VE, the groups-autocreate option may be used to automatically create
these groups on user logins.
默认情况下,OpenID 提供者报告的任何在 Proxmox VE 中不存在的组都会被忽略。如果希望 OpenID 提供者报告的所有组都存在于 Proxmox VE 中,可以使用 groups-autocreate 选项,在用户登录时自动创建这些组。
By default, groups are appended to the user’s existing groups. It may be
desirable to overwrite any groups that the user is already a member in Proxmox VE
with those from the OpenID provider. Enabling the groups-overwrite setting
removes all groups from the user in Proxmox VE before adding the groups reported by
the OpenID provider.
默认情况下,组会被追加到用户现有的组中。也可以选择用 OpenID 提供者的组覆盖用户在 Proxmox VE 中已属于的组。启用 groups-overwrite 设置会在添加 OpenID 提供者报告的组之前,先移除用户在 Proxmox VE 中的所有组。
In some cases, OpenID servers may send groups claims which include invalid
characters for Proxmox VE group IDs. Any groups that contain characters not allowed
in a Proxmox VE group name are not included and a warning will be sent to the logs.
在某些情况下,OpenID 服务器可能会发送包含 Proxmox VE 组 ID 不允许字符的组声明。任何包含 Proxmox VE 组名不允许字符的组都不会被包含,并且会向日志发送警告。
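For example, group mapping could be enabled on an existing OpenID realm as sketched below (the claim name groups is an assumption and depends on what your provider sends):
pveum realm modify myrealm2 --groups-claim groups --groups-autocreate 1
Adding --groups-overwrite 1 would additionally replace, rather than extend, the user's existing Proxmox VE groups on each login.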
Advanced settings 高级设置
-
Query userinfo endpoint (query-userinfo): Enabling this option requires the OpenID Connect authenticator to query the "userinfo" endpoint for claim values. Disabling this option is useful for some identity providers that do not support the "userinfo" endpoint (e.g. ADFS).
查询 userinfo 端点(query-userinfo):启用此选项需要 OpenID Connect 认证器查询“userinfo”端点以获取声明值。禁用此选项对于某些不支持“userinfo”端点的身份提供者(例如 ADFS)非常有用。
Examples 示例
Here is an example of creating an OpenID realm using Google. You need to
replace --client-id and --client-key with the values
from your Google OpenID settings.
下面是使用 Google 创建 OpenID 领域的示例。您需要将 --client-id 和 --client-key 替换为您在 Google OpenID 设置中的值。
pveum realm add myrealm1 --type openid --issuer-url https://accounts.google.com --client-id XXXX --client-key YYYY --username-claim email
The above command uses --username-claim email, so that the usernames on the
Proxmox VE side look like example.user@google.com@myrealm1.
上述命令使用了 --username-claim email,因此 Proxmox VE 端的用户名看起来像 example.user@google.com@myrealm1。
Keycloak (https://www.keycloak.org/) is a popular open source Identity
and Access Management tool, which supports OpenID Connect. In the following
example, you need to replace the --issuer-url and --client-id with
your information:
Keycloak(https://www.keycloak.org/)是一个流行的开源身份和访问管理工具,支持 OpenID Connect。以下示例中,您需要将 --issuer-url 和 --client-id 替换为您的信息:
pveum realm add myrealm2 --type openid --issuer-url https://your.server:8080/realms/your-realm --client-id XXX --username-claim username
Using --username-claim username enables simple usernames on the
Proxmox VE side, like example.user@myrealm2.
使用 --username-claim username 可以使 Proxmox VE 端的用户名更简单,如 example.user@myrealm2。
|
|
You need to ensure that the user is not allowed to edit
the username setting themselves (on the Keycloak server). 您需要确保用户不被允许自行编辑用户名设置(在 Keycloak 服务器上)。 |
14.6. Two-Factor Authentication
14.6. 双因素认证
There are two ways to use two-factor authentication:
使用双因素认证有两种方式:
It can be required by the authentication realm, either via TOTP
(Time-based One-Time Password) or YubiKey OTP. In this case, a newly
created user needs to have their keys added immediately, as there is no way to
log in without the second factor. In the case of TOTP, users can
also change the TOTP later on, provided they can log in first.
它可以由认证域要求,方式为 TOTP(基于时间的一次性密码)或 YubiKey OTP。在这种情况下,新创建的用户需要立即添加他们的密钥,因为没有第二因素就无法登录。对于 TOTP,用户也可以在先登录的前提下,之后更改 TOTP。
Alternatively, users can choose to opt-in to two-factor authentication
later on, even if the realm does not enforce it.
或者,用户也可以选择稍后启用双因素认证,即使该域不强制要求。
14.6.1. Available Second Factors
14.6.1. 可用的第二因素
You can set up multiple second factors, in order to avoid a situation in
which losing your smartphone or security key locks you out of your
account permanently.
您可以设置多个第二因素,以避免因丢失智能手机或安全密钥而永久无法访问账户的情况。
The following two-factor authentication methods are available in
addition to realm-enforced TOTP and YubiKey OTP:
除了域强制的 TOTP 和 YubiKey OTP 之外,还提供以下双因素认证方法:
-
User configured TOTP (Time-based One-Time Password). A short code derived from a shared secret and the current time, it changes every 30 seconds.
用户配置的 TOTP(基于时间的一次性密码)。这是一个由共享密钥和当前时间生成的短码,每 30 秒变化一次。 -
WebAuthn (Web Authentication). A general standard for authentication. It is implemented by various security devices, like hardware keys or trusted platform modules (TPM) from a computer or smart phone.
WebAuthn(网络认证)。一种通用的认证标准。它由各种安全设备实现,如硬件密钥或计算机或智能手机中的可信平台模块(TPM)。 -
Single use Recovery Keys. A list of keys which should either be printed out and locked in a secure place or saved digitally in an electronic vault. Each key can be used only once. These are perfect for ensuring that you are not locked out, even if all of your other second factors are lost or corrupt.
一次性恢复密钥。密钥列表,应打印出来并锁在安全的地方,或以电子方式保存在电子保险库中。每个密钥只能使用一次。这些密钥非常适合确保即使所有其他第二因素丢失或损坏,也不会被锁定在外。
Before WebAuthn was supported, U2F could be set up by the user. Existing
U2F factors can still be used, but it is recommended to switch to
WebAuthn, once it is configured on the server.
在支持 WebAuthn 之前,用户可以设置 U2F。现有的 U2F 因素仍然可以使用,但建议在服务器配置 WebAuthn 后切换到 WebAuthn。
14.6.2. Realm Enforced Two-Factor Authentication
14.6.2. 领域强制双因素认证
This can be done by selecting one of the available methods via the
TFA dropdown box when adding or editing an Authentication Realm.
When a realm has TFA enabled, it becomes a requirement, and only users
with configured TFA will be able to log in.
这可以通过在添加或编辑认证领域时,从 TFA 下拉框中选择可用的方法来实现。当某个领域启用 TFA 后,TFA 即成为必需,只有配置了 TFA 的用户才能登录。
Currently there are two methods available:
目前有两种可用的方法:
- Time-based OATH (TOTP) 基于时间的一次性密码(TOTP)
-
This uses the standard HMAC-SHA1 algorithm, where the current time is hashed with the user’s configured key. The time step and password length parameters are configurable.
这使用了标准的 HMAC-SHA1 算法,其中当前时间与用户配置的密钥进行哈希。时间步长和密码长度参数是可配置的。A user can have multiple keys configured (separated by spaces), and the keys can be specified in Base32 (RFC3548) or hexadecimal notation.
用户可以配置多个密钥(用空格分隔),密钥可以以 Base32(RFC3548)或十六进制表示法指定。Proxmox VE provides a key generation tool (oathkeygen) which prints out a random key in Base32 notation, that can be used directly with various OTP tools, such as the oathtool command-line tool, or on Android Google Authenticator, FreeOTP, andOTP or similar applications.
Proxmox VE 提供了一个密钥生成工具(oathkeygen),该工具以 Base32 表示法打印出一个随机密钥,可直接用于各种 OTP 工具,如 oathtool 命令行工具,或 Android 上的 Google Authenticator、FreeOTP、andOTP 等类似应用。 - YubiKey OTP
-
For authenticating via a YubiKey a Yubico API ID, API KEY and validation server URL must be configured, and users must have a YubiKey available. In order to get the key ID from a YubiKey, you can trigger the YubiKey once after connecting it via USB, and copy the first 12 characters of the typed password into the user’s Key IDs field.
通过 YubiKey 进行认证时,必须配置 Yubico API ID、API KEY 和验证服务器 URL,且用户必须拥有可用的 YubiKey。为了从 YubiKey 获取密钥 ID,您可以在通过 USB 连接后触发一次 YubiKey,并将输入密码的前 12 个字符复制到用户的密钥 ID 字段中。
Please refer to the YubiKey OTP
documentation for how to use the
YubiCloud or
host your own verification server.
请参阅 YubiKey OTP 文档,了解如何使用 YubiCloud 或自行托管验证服务器。
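Coming back to the oathkeygen tool mentioned in the TOTP entry above, a short sketch: generate a key, then compute the current OTP value for it with the oathtool command-line tool (the key shown is a made-up example):
oathkeygen
oathtool --totp -b PXNVSOLHNNSGGWP3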
14.6.3. Limits and Lockout of Two-Factor Authentication
14.6.3. 双因素认证的限制与锁定
A second factor is meant to protect users if their password is somehow leaked
or guessed. However, some factors could still be broken by brute force. For
this reason, users will be locked out after too many failed 2nd factor login
attempts.
第二因素旨在保护用户,防止其密码被泄露或猜测。然而,某些因素仍可能被暴力破解。因此,用户在多次第二因素登录失败后将被锁定。
For TOTP, 8 failed attempts will disable the user’s TOTP factors. They are
unlocked when logging in with a recovery key. If TOTP was the only available
factor, admin intervention is required, and it is highly recommended to require
the user to change their password immediately.
对于 TOTP,连续 8 次失败尝试将禁用用户的 TOTP 因素。使用恢复密钥登录时会解锁。如果 TOTP 是唯一可用的因素,则需要管理员干预,并强烈建议要求用户立即更改密码。
Since FIDO2/Webauthn and recovery keys are less susceptible to brute force
attacks, the limit there is higher (100 tries), but all second factors are
blocked for an hour when exceeded.
由于 FIDO2/Webauthn 和恢复密钥不易受到暴力破解攻击,因此尝试次数限制较高(100 次),但超过限制后,所有第二因素将被阻止一小时。
An admin can unlock a user’s Two-Factor Authentication at any time via the user
list in the UI or the command line:
管理员可以随时通过 UI 中的用户列表或命令行解锁用户的双因素认证:
pveum user tfa unlock joe@pve
14.6.4. User Configured TOTP Authentication
14.6.4. 用户配置的 TOTP 认证
Users can choose to enable TOTP or WebAuthn as a second factor on login, via
the TFA button in the user list (unless the realm enforces YubiKey OTP).
用户可以通过用户列表中的 TFA 按钮选择启用 TOTP 或 WebAuthn 作为登录的第二因素(除非领域强制使用 YubiKey OTP)。
Users can always add and use one time Recovery Keys.
用户始终可以添加并使用一次性恢复密钥。
After opening the TFA window, the user is presented with a dialog to set up
TOTP authentication. The Secret field contains the key, which can be
randomly generated via the Randomize button. An optional Issuer Name can be
added to provide information to the TOTP app about what the key belongs to.
Most TOTP apps will show the issuer name together with the corresponding
OTP values. The username is also included in the QR code for the TOTP app.
打开 TFA 窗口后,用户会看到一个设置 TOTP 认证的对话框。Secret 字段包含密钥,可以通过“随机生成”按钮随机生成。可以添加一个可选的发行者名称,以向 TOTP 应用提供该密钥所属的信息。大多数 TOTP 应用会将发行者名称与相应的 OTP 值一起显示。用户名也包含在 TOTP 应用的二维码中。
After generating a key, a QR code will be displayed, which can be used with most
OTP apps such as FreeOTP. The user then needs to verify the current user
password (unless logged in as root), as well as the ability to correctly use
the TOTP key, by typing the current OTP value into the Verification Code
field and pressing the Apply button.
生成密钥后,会显示一个二维码,可用于大多数 OTP 应用,如 FreeOTP。然后,用户需要验证当前用户密码(除非以 root 身份登录),并通过在“验证码”字段中输入当前 OTP 值并按“应用”按钮,验证正确使用 TOTP 密钥的能力。
14.6.5. TOTP
There is no server setup required. Simply install a TOTP app on your
smartphone (for example, FreeOTP) and use
the Proxmox VE web interface to add a TOTP factor.
无需服务器设置。只需在智能手机上安装一个 TOTP 应用(例如,FreeOTP),然后使用 Proxmox VE 网页界面添加 TOTP 因素。
14.6.6. WebAuthn
For WebAuthn to work, you need to have two things:
要使 WebAuthn 正常工作,您需要具备两样东西:
-
A trusted HTTPS certificate (for example, by using Let’s Encrypt). While it probably works with an untrusted certificate, some browsers may warn or refuse WebAuthn operations if it is not trusted.
受信任的 HTTPS 证书(例如,使用 Let’s Encrypt)。虽然使用不受信任的证书可能也能工作,但如果证书不受信任,某些浏览器可能会警告或拒绝 WebAuthn 操作。 -
Set up the WebAuthn configuration (see Datacenter → Options → WebAuthn Settings in the Proxmox VE web interface). This can be auto-filled in most setups.
设置 WebAuthn 配置(参见 Proxmox VE 网页界面中的数据中心 → 选项 → WebAuthn 设置)。在大多数设置中,这可以自动填写。
Once you have fulfilled both of these requirements, you can add a WebAuthn
configuration in the Two Factor panel under Datacenter → Permissions → Two
Factor.
一旦满足了这两个要求,您就可以在数据中心 → 权限 → 双因素的双因素面板中添加 WebAuthn 配置。
14.6.7. Recovery Keys 14.6.7. 恢复密钥
Recovery key codes do not need any preparation; you can simply create a
set of recovery keys in the Two Factor panel under Datacenter → Permissions
→ Two Factor.
恢复密钥代码无需任何准备;您只需在数据中心 → 权限 → 双因素的双因素面板中创建一组恢复密钥。
|
|
There can only be one set of single-use recovery keys per user at any
time. 每个用户在任何时候只能有一组一次性恢复密钥。 |
14.6.8. Server Side Webauthn Configuration
14.6.8. 服务器端 Webauthn 配置
To allow users to use WebAuthn authentication, it is necessary to use a valid
domain with a valid SSL certificate, otherwise some browsers may warn or refuse
to authenticate altogether.
为了允许用户使用 WebAuthn 认证,必须使用带有有效 SSL 证书的有效域名,否则某些浏览器可能会发出警告或完全拒绝认证。
|
|
Changing the WebAuthn configuration may render all existing WebAuthn
registrations unusable! 更改 WebAuthn 配置可能会导致所有现有的 WebAuthn 注册无法使用! |
This is done via /etc/pve/datacenter.cfg. For instance:
这是通过 /etc/pve/datacenter.cfg 完成的。例如:
webauthn: rp=mypve.example.com,origin=https://mypve.example.com:8006,id=mypve.example.com
14.6.9. Server Side U2F Configuration
14.6.9. 服务器端 U2F 配置
|
|
It is recommended to use WebAuthn instead. 建议改用 WebAuthn。 |
To allow users to use U2F authentication, it may be necessary to use a valid
domain with a valid SSL certificate, otherwise, some browsers may print
a warning or reject U2F usage altogether. Initially, an AppId
[53]
needs to be configured.
为了允许用户使用 U2F 认证,可能需要使用带有有效 SSL 证书的有效域名,否则某些浏览器可能会显示警告或完全拒绝使用 U2F。最初,需要配置一个 AppId [53]。
|
|
Changing the AppId will render all existing U2F registrations
unusable! 更改 AppId 将导致所有现有的 U2F 注册无法使用! |
This is done via /etc/pve/datacenter.cfg. For instance:
这通过 /etc/pve/datacenter.cfg 来完成。例如:
u2f: appid=https://mypve.example.com:8006
For a single node, the AppId can simply be the address of the web interface,
exactly as it is used in the browser, including the https:// and the port, as
shown above. Please note that some browsers may be more strict than others when
matching AppIds.
对于单个节点,AppId 可以简单地设置为 Web 界面的地址,完全按照浏览器中使用的方式,包括 https://和端口,如上所示。请注意,某些浏览器在匹配 AppId 时可能比其他浏览器更严格。
When using multiple nodes, it is best to have a separate https server
providing an appid.json
[54]
file, as it seems to be compatible with most
browsers. If all nodes use subdomains of the same top level domain, it may be
enough to use the TLD as AppId. It should however be noted that some browsers
may not accept this.
当使用多个节点时,最好有一个单独的 https 服务器提供 appid.json [54] 文件,因为它似乎与大多数浏览器兼容。如果所有节点使用同一顶级域的子域,使用顶级域作为 AppId 可能就足够了。但应注意,有些浏览器可能不接受这种做法。
|
|
A bad AppId will usually produce an error, but we have encountered
situations when this does not happen, particularly when using a top level domain
AppId for a node that is accessed via a subdomain in Chromium. For this reason
it is recommended to test the configuration with multiple browsers, as changing
the AppId later will render existing U2F registrations unusable. 错误的 AppId 通常会产生错误,但我们遇到过某些情况下不会出现错误,特别是在 Chromium 中使用顶级域 AppId 访问通过子域访问的节点时。因此,建议使用多个浏览器测试配置,因为之后更改 AppId 会导致现有的 U2F 注册无法使用。 |
14.6.10. Activating U2F as a User
14.6.10. 作为用户激活 U2F
To enable U2F authentication, open the TFA window’s U2F tab, type in the
current password (unless logged in as root), and press the Register button.
If the server is set up correctly and the browser accepts the server’s provided
AppId, a message will appear prompting the user to press the button on the
U2F device (if it is a YubiKey, the button light should be toggling on and
off steadily, roughly twice per second).
要启用 U2F 认证,打开 TFA 窗口的 U2F 标签页,输入当前密码(除非以 root 身份登录),然后按下注册按钮。如果服务器设置正确且浏览器接受服务器提供的 AppId,将出现提示消息,要求用户按下 U2F 设备上的按钮(如果是 YubiKey,按钮灯应以大约每秒两次的频率稳定闪烁)。
Firefox users may need to enable security.webauth.u2f via about:config
before they can use a U2F token.
Firefox 用户可能需要通过 about:config 启用 security.webauth.u2f,才能使用 U2F 代币。
14.7. Permission Management
14.7. 权限管理
In order for a user to perform an action (such as listing, modifying or
deleting parts of a VM’s configuration), the user needs to have the
appropriate permissions.
为了让用户执行某个操作(例如列出、修改或删除虚拟机配置的部分内容),用户需要拥有相应的权限。
Proxmox VE uses a role and path based permission management system. An entry in
the permissions table allows a user, group or token to take on a specific role
when accessing an object or path. This means that such an access rule can
be represented as a triple of (path, user, role), (path, group,
role) or (path, token, role), with the role containing a set of allowed
actions, and the path representing the target of these actions.
Proxmox VE 使用基于角色和路径的权限管理系统。权限表中的一条条目允许用户、组或代币在访问某个对象或路径时承担特定角色。这意味着这样的访问规则可以表示为(路径,用户,角色)、(路径,组,角色)或(路径,代币,角色)三元组,其中角色包含一组允许的操作,路径表示这些操作的目标。
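Such a triple maps directly to one ACL entry. A minimal sketch, assuming a user alice@pve and a VM with ID 100:
pveum acl modify /vms/100 -user alice@pve -role PVEVMUser
This grants alice@pve the privileges bundled in the PVEVMUser role (described below) on the path /vms/100.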
14.7.1. Roles 14.7.1. 角色
A role is simply a list of privileges. Proxmox VE comes with a number
of predefined roles, which satisfy most requirements.
角色只是权限的列表。Proxmox VE 提供了许多预定义角色,满足大多数需求。
-
Administrator: has full privileges
Administrator:拥有全部权限 -
NoAccess: has no privileges (used to forbid access)
NoAccess:没有任何权限(用于禁止访问) -
PVEAdmin: can do most tasks, but has no rights to modify system settings (Sys.PowerMgmt, Sys.Modify, Realm.Allocate) or permissions (Permissions.Modify)
PVEAdmin:可以执行大多数任务,但无权修改系统设置(Sys.PowerMgmt、Sys.Modify、Realm.Allocate)或权限(Permissions.Modify) -
PVEAuditor: has read only access
PVEAuditor:仅具有只读访问权限 -
PVEDatastoreAdmin: create and allocate backup space and templates
PVEDatastoreAdmin:创建和分配备份空间及模板 -
PVEDatastoreUser: allocate backup space and view storage
PVEDatastoreUser:分配备份空间并查看存储 -
PVEMappingAdmin: manage resource mappings
PVEMappingAdmin:管理资源映射 -
PVEMappingUser: view and use resource mappings
PVEMappingUser:查看和使用资源映射 -
PVEPoolAdmin: allocate pools
PVEPoolAdmin:分配资源池 -
PVEPoolUser: view pools PVEPoolUser:查看资源池
-
PVESDNAdmin: manage SDN configuration
PVESDNAdmin:管理 SDN 配置 -
PVESDNUser: access to bridges/vnets
PVESDNUser:访问桥接/虚拟网络 -
PVESysAdmin: audit, system console and system logs
PVESysAdmin:审计、系统控制台和系统日志 -
PVETemplateUser: view and clone templates
PVETemplateUser:查看和克隆模板 -
PVEUserAdmin: manage users
PVEUserAdmin:管理用户 -
PVEVMAdmin: fully administer VMs
PVEVMAdmin:完全管理虚拟机 -
PVEVMUser: view, backup, configure CD-ROM, VM console, VM power management
PVEVMUser:查看、备份、配置光驱、虚拟机控制台、虚拟机电源管理
You can see the whole set of predefined roles in the GUI.
您可以在图形界面中查看所有预定义角色。
You can add new roles via the GUI or the command line.
您可以通过图形界面或命令行添加新角色。
From the GUI, navigate to the Permissions → Roles tab from Datacenter and
click on the Create button. There you can set a role name and select any
desired privileges from the Privileges drop-down menu.
在图形界面中,导航到数据中心的权限 → 角色标签页,然后点击创建按钮。在那里,您可以设置角色名称并从权限下拉菜单中选择所需的权限。
To add a role through the command line, you can use the pveum CLI tool, for
example:
要通过命令行添加角色,您可以使用 pveum CLI 工具,例如:
pveum role add VM_Power-only --privs "VM.PowerMgmt VM.Console"
pveum role add Sys_Power-only --privs "Sys.PowerMgmt Sys.Console"
|
|
Roles starting with PVE are always builtin, custom roles are not
allowed to use this reserved prefix. 以 PVE 开头的角色始终是内置的,自定义角色不允许使用此保留前缀。 |
14.7.2. Privileges 14.7.2. 权限
A privilege is the right to perform a specific action. To simplify
management, lists of privileges are grouped into roles, which can then
be used in the permission table. Note that privileges cannot be directly
assigned to users and paths without being part of a role.
权限是执行特定操作的权利。为了简化管理,权限列表被分组到角色中,然后可以在权限表中使用这些角色。请注意,权限不能直接分配给用户和路径,必须作为角色的一部分。
We currently support the following privileges:
我们目前支持以下权限:
-
Node / System related privileges
节点 / 系统相关权限 -
-
Group.Allocate: create/modify/remove groups
Group.Allocate:创建/修改/删除组 -
Mapping.Audit: view resource mappings
Mapping.Audit:查看资源映射 -
Mapping.Modify: manage resource mappings
Mapping.Modify:管理资源映射 -
Mapping.Use: use resource mappings
Mapping.Use:使用资源映射 -
Permissions.Modify: modify access permissions
Permissions.Modify:修改访问权限 -
Pool.Allocate: create/modify/remove a pool
Pool.Allocate:创建/修改/删除资源池 -
Pool.Audit: view a pool
Pool.Audit:查看资源池 -
Realm.AllocateUser: assign user to a realm
Realm.AllocateUser:将用户分配到领域 -
Realm.Allocate: create/modify/remove authentication realms
Realm.Allocate:创建/修改/删除认证域 -
SDN.Allocate: manage SDN configuration
SDN.Allocate:管理 SDN 配置 -
SDN.Audit: view SDN configuration
SDN.Audit:查看 SDN 配置 -
Sys.Audit: view node status/config, Corosync cluster config, and HA config
Sys.Audit:查看节点状态/配置、Corosync 集群配置和 HA 配置 -
Sys.Console: console access to node
Sys.Console:节点的控制台访问 -
Sys.Incoming: allow incoming data streams from other clusters (experimental)
Sys.Incoming:允许来自其他集群的传入数据流(实验性) -
Sys.Modify: create/modify/remove node network parameters
Sys.Modify:创建/修改/删除节点网络参数 -
Sys.PowerMgmt: node power management (start, stop, reset, shutdown, …)
Sys.PowerMgmt:节点电源管理(启动、停止、重置、关机等) -
Sys.Syslog: view syslog Sys.Syslog:查看系统日志
-
User.Modify: create/modify/remove user access and details.
User.Modify:创建/修改/删除用户访问权限和详细信息
-
-
Virtual machine related privileges
虚拟机相关权限 -
-
SDN.Use: access SDN vnets and local network bridges
SDN.Use:访问 SDN 虚拟网络和本地网络桥接器 -
VM.Allocate: create/remove VM on a server
VM.Allocate:在服务器上创建/删除虚拟机 -
VM.Audit: view VM config
VM.Audit:查看虚拟机配置 -
VM.Backup: backup/restore VMs
VM.Backup:备份/恢复虚拟机 -
VM.Clone: clone/copy a VM
VM.Clone:克隆/复制虚拟机 -
VM.Config.CDROM: eject/change CD-ROM
VM.Config.CDROM:弹出/更换光驱 -
VM.Config.CPU: modify CPU settings
VM.Config.CPU:修改 CPU 设置 -
VM.Config.Cloudinit: modify Cloud-init parameters
VM.Config.Cloudinit:修改 Cloud-init 参数 -
VM.Config.Disk: add/modify/remove disks
VM.Config.Disk:添加/修改/删除磁盘 -
VM.Config.HWType: modify emulated hardware types
VM.Config.HWType:修改模拟硬件类型 -
VM.Config.Memory: modify memory settings
VM.Config.Memory:修改内存设置 -
VM.Config.Network: add/modify/remove network devices
VM.Config.Network:添加/修改/删除网络设备 -
VM.Config.Options: modify any other VM configuration
VM.Config.Options:修改任何其他虚拟机配置 -
VM.Console: console access to VM
VM.Console:虚拟机的控制台访问 -
VM.Migrate: migrate VM to alternate server on cluster
VM.Migrate:将虚拟机迁移到集群中的备用服务器 -
VM.Monitor: access to VM monitor (kvm)
VM.Monitor:访问虚拟机监视器(kvm) -
VM.PowerMgmt: power management (start, stop, reset, shutdown, …)
VM.PowerMgmt:电源管理(启动、停止、重置、关机等) -
VM.Snapshot.Rollback: rollback VM to one of its snapshots
VM.Snapshot.Rollback:将虚拟机回滚到其某个快照 -
VM.Snapshot: create/delete VM snapshots
VM.Snapshot:创建/删除虚拟机快照
-
-
Storage related privileges
存储相关权限 -
-
Datastore.Allocate: create/modify/remove a datastore and delete volumes
Datastore.Allocate:创建/修改/删除数据存储并删除卷 -
Datastore.AllocateSpace: allocate space on a datastore
Datastore.AllocateSpace:在数据存储上分配空间 -
Datastore.AllocateTemplate: allocate/upload templates and ISO images
Datastore.AllocateTemplate:分配/上传模板和 ISO 镜像 -
Datastore.Audit: view/browse a datastore
Datastore.Audit:查看/浏览数据存储
-
|
|
Both Permissions.Modify and Sys.Modify should be handled with
care, as they allow modifying aspects of the system and its configuration that
are dangerous or sensitive. Permissions.Modify 和 Sys.Modify 都应谨慎处理,因为它们允许修改系统及其配置中危险或敏感的部分。 |
|
|
Carefully read the section about inheritance below to understand how
assigned roles (and their privileges) are propagated along the ACL tree. 请仔细阅读下面关于继承的部分,以了解分配的角色(及其权限)如何沿着 ACL 树传播。 |
14.7.3. Objects and Paths
14.7.3. 对象和路径
Access permissions are assigned to objects, such as virtual machines,
storages or resource pools.
We use file system like paths to address these objects. These paths form a
natural tree, and permissions of higher levels (shorter paths) can
optionally be propagated down within this hierarchy.
访问权限被分配给对象,例如虚拟机、存储或资源池。我们使用类似文件系统的路径来定位这些对象。这些路径形成一个自然的树状结构,高层级(路径较短)的权限可以选择性地在该层级结构中向下传播。
Paths can be templated. When an API call requires permissions on a
templated path, the path may contain references to parameters of the API
call. These references are specified in curly braces. Some parameters are
implicitly taken from the API call’s URI. For instance, the permission path
/nodes/{node} when calling /nodes/mynode/status requires permissions on
/nodes/mynode, while the path {path} in a PUT request to /access/acl
refers to the method’s path parameter.
路径可以是模板化的。当 API 调用需要对模板化路径的权限时,路径中可能包含对 API 调用参数的引用。这些引用用大括号括起来。有些参数是隐式从 API 调用的 URI 中获取的。例如,调用 /nodes/mynode/status 时,权限路径 /nodes/{node} 需要对 /nodes/mynode 拥有权限,而在对 /access/acl 的 PUT 请求中,路径 {path} 指的是该方法的路径参数。
Some examples are: 一些示例包括:
-
/nodes/{node}: Access to Proxmox VE server machines
/nodes/{node}:访问 Proxmox VE 服务器机器 -
/vms: Covers all VMs
/vms:涵盖所有虚拟机 -
/vms/{vmid}: Access to specific VMs
/vms/{vmid}:访问特定虚拟机 -
/storage/{storeid}: Access to a specific storage
/ storage/{storeid}:访问特定存储 -
/pool/{poolname}: Access to resources contained in a specific pool
/ pool/{poolname}:访问特定资源池中的资源 -
/access/groups: Group administration
/ access/groups:组管理 -
/access/realms/{realmid}: Administrative access to realms
/ access/realms/{realmid}:对领域的管理访问
Inheritance 继承
As mentioned earlier, object paths form a file system like tree, and
permissions can be inherited by objects down that tree (the propagate flag is
set by default). We use the following inheritance rules:
如前所述,对象路径形成类似文件系统的树状结构,权限可以被该树下的对象继承(传播标志默认设置)。我们使用以下继承规则:
-
Permissions for individual users always replace group permissions.
单个用户的权限总是替代组权限。 -
Permissions for groups apply when the user is member of that group.
当用户是该组成员时,组权限生效。 -
Permissions on deeper levels replace those inherited from an upper level.
更深层级的权限会替代从上层继承的权限。 -
NoAccess cancels all other roles on a given path.
NoAccess 会取消给定路径上的所有其他角色。
Additionally, privilege separated tokens can never have permissions on any
given path that their associated user does not have.
此外,权限分离的令牌在任何给定路径上的权限都不能超过其关联用户所拥有的权限。
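A short sketch of these rules, assuming a user alice@pve: a role granted near the root is inherited down the tree, while a more specific assignment on a deeper path replaces it there.
pveum acl modify / -user alice@pve -role PVEAuditor
pveum acl modify /vms/100 -user alice@pve -role PVEVMAdmin
alice@pve then has read-only access everywhere by inheritance, but full VM administration rights on /vms/100.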
14.7.4. Pools 14.7.4. 资源池
Pools can be used to group a set of virtual machines and datastores. You can
then simply set permissions on pools (/pool/{poolid}), which are inherited by
all pool members. This is a great way to simplify access control.
池可以用来将一组虚拟机和数据存储分组。然后,您只需在池(/pool/{poolid})上设置权限,所有池成员都会继承这些权限。这是简化访问控制的好方法。
14.7.5. Which Permissions Do I Need?
14.7.5. 我需要哪些权限?
The required API permissions are documented for each individual
method, and can be found at https://pve.proxmox.com/pve-docs/api-viewer/.
所需的 API 权限在每个具体方法的文档中都有说明,可以在 https://pve.proxmox.com/pve-docs/api-viewer/ 找到。
The permissions are specified as a list, which can be interpreted as a
tree of logic and access-check functions:
权限以列表形式指定,可以将其解释为逻辑和访问检查函数的树结构:
-
["and", <subtests>...] and ["or", <subtests>...]
["and", <子测试>...] 和 ["or", <子测试>...] -
Each (and) or any (or) further element in the current list has to be true.
当前列表中的每个(and)或任意(or)元素必须为真。 -
["perm", <path>, [ <privileges>... ], <options>...]
["perm", <路径>, [ <权限>... ], <选项>...] -
The path is a templated parameter (see Objects and Paths). All (or, if the any option is used, any) of the listed privileges must be allowed on the specified path. If a require-param option is specified, then its specified parameter is required even if the API call’s schema otherwise lists it as being optional.
路径是一个模板参数(参见对象和路径)。必须允许在指定路径上拥有所有(或如果使用了 any 选项,则为任意)列出的权限。如果指定了 require-param 选项,则即使 API 调用的模式将其列为可选,也必须提供其指定的参数。 - ["userid-group", [ <privileges>... ], <options>...]
-
The caller must have any of the listed privileges on /access/groups. In addition, there are two possible checks, depending on whether the groups_param option is set:
调用者必须在 /access/groups 上拥有列表中任意一个权限。此外,根据是否设置了 groups_param 选项,有两种可能的检查方式:-
groups_param is set: The API call has a non-optional groups parameter and the caller must have any of the listed privileges on all of the listed groups.
设置了 groups_param:API 调用包含一个非可选的 groups 参数,调用者必须在所有列出的组上拥有列表中任意一个权限。 -
groups_param is not set: The user passed via the userid parameter must exist and be part of a group on which the caller has any of the listed privileges (via the /access/groups/<group> path).
未设置 groups_param:通过 userid 参数传递的用户必须存在,并且属于调用者在 /access/groups/<group> 路径上拥有列表中任意一个权限的某个组。
-
- ["userid-param", "self"]
-
The value provided for the API call’s userid parameter must refer to the user performing the action (usually in conjunction with or, to allow users to perform an action on themselves, even if they don’t have elevated privileges).
API 调用中 userid 参数提供的值必须指向执行该操作的用户(通常与 or 一起使用,以允许用户对自己执行操作,即使他们没有提升的权限)。 - ["userid-param", "Realm.AllocateUser"]
-
The user needs Realm.AllocateUser access to /access/realm/<realm>, with <realm> referring to the realm of the user passed via the userid parameter. Note that the user does not need to exist in order to be associated with a realm, since user IDs are passed in the form of <username>@<realm>.
用户需要对 /access/realm/<realm> 拥有 Realm.AllocateUser 访问权限,其中 <realm> 指通过 userid 参数传递的用户所属的领域。请注意,用户不必存在于系统中才能与某个领域关联,因为用户 ID 以 <username>@<realm> 的形式传递。 - ["perm-modify", <path>]
-
The path is a templated parameter (see Objects and Paths). The user needs either the Permissions.Modify privilege or, depending on the path, the following privileges as a possible substitute:
路径是一个模板参数(参见对象和路径)。用户需要拥有 Permissions.Modify 权限,或者根据路径,以下权限之一可作为替代:-
/storage/...: requires Datastore.Allocate
/storage/...:需要 Datastore.Allocate 权限 -
/vms/...: requires VM.Allocate
/vms/...:需要 VM.Allocate 权限 -
/pool/...: requires Pool.Allocate
/pool/...:需要 Pool.Allocate 权限
If the path is empty, Permissions.Modify on /access is required.
如果路径为空,则需要对 /access 具有 Permissions.Modify 权限。If the user does not have the Permissions.Modify privilege, they can only delegate subsets of their own privileges on the given path (e.g., a user with PVEVMAdmin could assign PVEVMUser, but not PVEAdmin).
如果用户没有 Permissions.Modify 权限,则他们只能在给定路径上委派自己权限的子集(例如,具有 PVEVMAdmin 权限的用户可以分配 PVEVMUser,但不能分配 PVEAdmin)。
-
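As an illustrative, made-up example of how such a list reads, a check that requires disk privileges on the VM and space allocation on the target storage could look like this:
["and",
    ["perm", "/vms/{vmid}", [ "VM.Config.Disk" ]],
    ["perm", "/storage/{storage}", [ "Datastore.AllocateSpace" ]]]
Both subtests must succeed, each on the path built from the respective API call parameter.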
14.8. Command-line Tool 14.8. 命令行工具
Most users will simply use the GUI to manage users. But there is also
a fully featured command-line tool called pveum (short for “Proxmox
VE User Manager”). Please note that all Proxmox VE command-line
tools are wrappers around the API, so you can also access those
functions through the REST API.
大多数用户只会使用图形界面来管理用户。但也有一个功能齐全的命令行工具,名为 pveum(“Proxmox VE 用户管理器”的缩写)。请注意,所有 Proxmox VE 命令行工具都是基于 API 的封装,因此你也可以通过 REST API 访问这些功能。
Here are some simple usage examples. To show help, type:
以下是一些简单的使用示例。要显示帮助,请输入:
pveum
or (to show detailed help about a specific command)
或者(显示某个特定命令的详细帮助)
pveum help user add
Create a new user: 创建一个新用户:
pveum user add testuser@pve -comment "Just a test"
Set or change the password (not all realms support this):
设置或更改密码(并非所有领域都支持此操作):
pveum passwd testuser@pve
Disable a user: 禁用用户:
pveum user modify testuser@pve -enable 0
Create a new group: 创建新组:
pveum group add testgroup
Create a new role: 创建新角色:
pveum role add VM_Power-only -privs "VM.PowerMgmt VM.Console"
14.9. Real World Examples
14.9. 真实案例
14.9.1. Administrator Group
14.9.1. 管理员组
It is possible that an administrator would want to create a group of users with
full administrator rights (without using the root account).
管理员可能希望创建一个拥有完全管理员权限的用户组(不使用 root 账户)。
To do this, first define the group:
为此,首先定义该组:
pveum group add admin -comment "System Administrators"
Then assign the role: 然后分配角色:
pveum acl modify / -group admin -role Administrator
Finally, you can add users to the new admin group:
最后,您可以将用户添加到新的管理员组:
pveum user modify testuser@pve -group admin
14.9.2. Auditors 14.9.2. 审计员
You can give read only access to users by assigning the PVEAuditor
role to users or groups.
您可以通过将 PVEAuditor 角色分配给用户或组,赋予他们只读访问权限。
Example 1: Allow user joe@pve to see everything
示例 1:允许用户 joe@pve 查看所有内容
pveum acl modify / -user joe@pve -role PVEAuditor
Example 2: Allow user joe@pve to see all virtual machines
示例 2:允许用户 joe@pve 查看所有虚拟机
pveum acl modify /vms -user joe@pve -role PVEAuditor
14.9.3. Delegate User Management
14.9.3. 委派用户管理
If you want to delegate user management to user joe@pve, you can do
that with:
如果您想将用户管理权限委派给用户 joe@pve,可以这样做:
pveum acl modify /access -user joe@pve -role PVEUserAdmin
User joe@pve can now add and remove users, and change other user attributes,
such as passwords. This is a very powerful role, and you most
likely want to limit it to selected realms and groups. The following
example allows joe@pve to modify users within the realm pve, if they
are members of group customers:
用户 joe@pve 现在可以添加和删除用户,并更改其他用户属性,如密码。这是一个非常强大的角色,您很可能希望将其限制在选定的领域和组中。以下示例允许 joe@pve 修改属于 pve 领域且是 customers 组成员的用户:
pveum acl modify /access/realm/pve -user joe@pve -role PVEUserAdmin
pveum acl modify /access/groups/customers -user joe@pve -role PVEUserAdmin
|
|
The user is able to add other users, but only if they are
members of the group customers and within the realm pve. 该用户能够添加其他用户,但前提是这些用户属于 customers 组且在 pve 领域内。 |
14.9.4. Limited API Token for Monitoring
14.9.4. 用于监控的有限 API 代币
Permissions on API tokens are always a subset of those of their corresponding
user, meaning that an API token can’t be used to carry out a task that the
backing user has no permission to do. This section will demonstrate how you can
use an API token with separate privileges, to limit the token owner’s
permissions further.
API 代币的权限始终是其对应用户权限的子集,这意味着 API 代币不能用于执行其背后用户无权限执行的任务。本节将演示如何使用具有独立权限的 API 代币,进一步限制代币所有者的权限。
Give the user joe@pve the role PVEVMAdmin on all VMs:
给用户 joe@pve 在所有虚拟机上赋予 PVEVMAdmin 角色:
pveum acl modify /vms -user joe@pve -role PVEVMAdmin
Add a new API token with separate privileges, which is only allowed to view VM
information (for example, for monitoring purposes):
添加一个具有独立权限的新 API 代币,该代币仅允许查看虚拟机信息(例如,用于监控目的):
pveum user token add joe@pve monitoring -privsep 1
pveum acl modify /vms -token 'joe@pve!monitoring' -role PVEAuditor
Verify the permissions of the user and token:
验证用户和代币的权限:
pveum user permissions joe@pve
pveum user token permissions joe@pve monitoring
14.9.5. Resource Pools 14.9.5. 资源池
An enterprise is usually structured into several smaller departments, and it is
common that you want to assign resources and delegate management tasks to each
of these. Let’s assume that you want to set up a pool for a software development
department. First, create a group:
一个企业通常被划分为几个较小的部门,通常你会希望为每个部门分配资源并委派管理任务。假设你想为软件开发部门设置一个资源池。首先,创建一个组:
pveum group add developers -comment "Our software developers"
Now we create a new user which is a member of that group:
现在我们创建一个属于该组的新用户:
pveum user add developer1@pve -group developers -password
|
|
The "-password" parameter will prompt you for a password “-password”参数会提示你输入密码 |
Then we create a resource pool for our development department to use:
然后我们为开发部门创建一个资源池:
pveum pool add dev-pool --comment "IT development pool"
Finally, we can assign permissions to that pool:
最后,我们可以将权限分配给该资源池:
pveum acl modify /pool/dev-pool/ -group developers -role PVEAdmin
Our software developers can now administer the resources assigned to
that pool.
我们的软件开发人员现在可以管理分配给该资源池的资源。
15. High Availability
15. 高可用性
Our modern society depends heavily on information provided by
computers over the network. Mobile devices amplified that dependency,
because people can access the network any time from anywhere. If you
provide such services, it is very important that they are available
most of the time.
我们现代社会在很大程度上依赖于通过网络提供的计算机信息。移动设备加剧了这种依赖,因为人们可以随时随地访问网络。如果您提供此类服务,确保它们大部分时间可用非常重要。
We can mathematically define the availability as the ratio of (A), the
total time a service is capable of being used during a given interval
to (B), the length of the interval. It is normally expressed as a
percentage of uptime in a given year.
我们可以用数学方式定义可用性为(A)服务在给定时间间隔内可用的总时间与(B)该时间间隔长度的比率。它通常以一年内正常运行时间的百分比表示。
| Availability % 可用性百分比 | Downtime per year 每年停机时间 |
|---|---|
| 99 | 3.65 days 3.65 天 |
| 99.9 | 8.76 hours 8.76 小时 |
| 99.99 | 52.56 minutes 52.56 分钟 |
| 99.999 | 5.26 minutes 5.26 分钟 |
| 99.9999 | 31.5 seconds 31.5 秒 |
| 99.99999 | 3.15 seconds 3.15 秒 |
There are several ways to increase availability. The most elegant
solution is to rewrite your software, so that you can run it on
several hosts at the same time. The software itself needs to have a way
to detect errors and do failover. If you only want to serve read-only
web pages, then this is relatively simple. However, this is generally complex
and sometimes impossible, because you cannot modify the software yourself. The
following solutions work without modifying the software:
有几种方法可以提高可用性。最优雅的解决方案是重写您的软件,使其能够同时在多个主机上运行。软件本身需要具备检测错误和故障切换的能力。如果您只想提供只读网页,那么这相对简单。然而,这通常很复杂,有时甚至不可能,因为您无法自行修改软件。以下解决方案无需修改软件即可实现:
-
Use reliable “server” components
使用可靠的“服务器”组件Computer components with the same functionality can have varying reliability numbers, depending on the component quality. Most vendors sell components with higher reliability as “server” components - usually at higher price.
具有相同功能的计算机组件,其可靠性数值可能不同,这取决于组件的质量。大多数供应商将可靠性较高的组件作为“服务器”组件出售——通常价格更高。 -
Eliminate single point of failure (redundant components)
消除单点故障(冗余组件)-
use an uninterruptible power supply (UPS)
使用不间断电源(UPS) -
use redundant power supplies in your servers
在服务器中使用冗余电源供应器 -
use ECC-RAM 使用 ECC 内存
-
use redundant network hardware
使用冗余网络硬件 -
use RAID for local storage
对本地存储使用 RAID -
use distributed, redundant storage for VM data
对虚拟机数据使用分布式冗余存储
-
-
Reduce downtime 减少停机时间
-
rapidly accessible administrators (24/7)
快速可访问的管理员(全天候 24/7) -
availability of spare parts (other nodes in a Proxmox VE cluster)
备件的可用性(Proxmox VE 集群中的其他节点) -
automatic error detection (provided by ha-manager)
自动错误检测(由 ha-manager 提供) -
automatic failover (provided by ha-manager)
自动故障转移(由 ha-manager 提供)
-
Virtualization environments like Proxmox VE make it much easier to reach
high availability because they remove the “hardware” dependency. They
also support the setup and use of redundant storage and network
devices, so if one host fails, you can simply start those services on
another host within your cluster.
像 Proxmox VE 这样的虚拟化环境使实现高可用性变得更加容易,因为它们消除了“硬件”依赖。它们还支持冗余存储和网络设备的设置和使用,因此如果一个主机发生故障,您可以简单地在集群中的另一台主机上启动这些服务。
Better still, Proxmox VE provides a software stack called ha-manager,
which can do that automatically for you. It is able to automatically
detect errors and do automatic failover.
更好的是,Proxmox VE 提供了一个名为 ha-manager 的软件堆栈,可以为您自动完成这些操作。它能够自动检测错误并执行自动故障转移。
Proxmox VE ha-manager works like an “automated” administrator. First, you
configure what resources (VMs, containers, …) it should
manage. Then, ha-manager observes the correct functionality, and handles
service failover to another node in case of errors. ha-manager can
also handle normal user requests which may start, stop, relocate and
migrate a service.
Proxmox VE 的 ha-manager 就像一个“自动化”的管理员。首先,您配置它应管理的资源(虚拟机、容器等)。然后,ha-manager 监控其正常功能,并在发生错误时将服务故障转移到另一节点。ha-manager 还可以处理正常的用户请求,如启动、停止、重新定位和迁移服务。
But high availability comes at a price. High quality components are
more expensive, and making them redundant doubles the costs at
least. Additional spare parts increase costs further. So you should
carefully calculate the benefits, and compare with those additional
costs.
但高可用性是有代价的。高质量的组件更昂贵,使其冗余至少会使成本翻倍。额外的备件会进一步增加成本。因此,您应仔细计算收益,并与这些额外成本进行比较。
|
|
Increasing availability from 99% to 99.9% is relatively
simple. But increasing availability from 99.9999% to 99.99999% is very
hard and costly. ha-manager has typical error detection and failover
times of about 2 minutes, so you can get no more than 99.999%
availability. 将可用性从 99%提高到 99.9%相对简单。但将可用性从 99.9999%提高到 99.99999%则非常困难且成本高昂。ha-manager 的典型错误检测和故障切换时间约为 2 分钟,因此您最多只能获得 99.999%的可用性。 |
15.1. Requirements 15.1. 要求
You must meet the following requirements before you start with HA:
在开始使用高可用性之前,您必须满足以下要求:
-
at least three cluster nodes (to get reliable quorum)
至少三个集群节点(以获得可靠的仲裁) -
shared storage for VMs and containers
用于虚拟机和容器的共享存储 -
hardware redundancy (everywhere)
硬件冗余(全方位) -
use reliable “server” components
使用可靠的“服务器”组件 -
hardware watchdog - if not available we fall back to the linux kernel software watchdog (softdog)
硬件看门狗——如果不可用,我们将回退到 Linux 内核软件看门狗(softdog) -
optional hardware fencing devices
可选的硬件围栏设备
15.2. Resources 15.2. 资源
We call the primary management unit handled by ha-manager a
resource. A resource (also called “service”) is uniquely
identified by a service ID (SID), which consists of the resource type
and a type specific ID, for example vm:100. That example would be a
resource of type vm (virtual machine) with the ID 100.
我们将由 ha-manager 处理的主要管理单元称为资源。资源(也称为“服务”)通过服务 ID(SID)唯一标识,SID 由资源类型和特定类型的 ID 组成,例如 vm:100。该示例表示类型为 vm(虚拟机)的资源,ID 为 100。
For now we have two important resources types - virtual machines and
containers. One basic idea here is that we can bundle related software
into such a VM or container, so there is no need to compose one big
service from other services, as was done with rgmanager. In
general, a HA managed resource should not depend on other resources.
目前我们有两种重要的资源类型——虚拟机和容器。这里的一个基本理念是,我们可以将相关的软件打包到这样的虚拟机或容器中,因此无需像使用 rgmanager 那样将一个大服务由其他服务组合而成。一般来说,高可用(HA)管理的资源不应依赖于其他资源。
15.3. Management Tasks 15.3. 管理任务
This section provides a short overview of common management tasks. The
first step is to enable HA for a resource. This is done by adding the
resource to the HA resource configuration. You can do this using the
GUI, or simply use the command-line tool, for example:
本节简要介绍常见的管理任务。第一步是为资源启用高可用(HA)。这可以通过将资源添加到 HA 资源配置中来完成。你可以使用图形界面,也可以简单地使用命令行工具,例如:
# ha-manager add vm:100
The HA stack now tries to start the resources and keep them
running. Please note that you can configure the “requested”
resources state. For example you may want the HA stack to stop the
resource:
HA 堆栈现在会尝试启动资源并保持其运行。请注意,你可以配置“请求的”资源状态。例如,你可能希望 HA 堆栈停止该资源:
# ha-manager set vm:100 --state stopped
and start it again later:
然后稍后再重新启动它:
# ha-manager set vm:100 --state started
You can also use the normal VM and container management commands. They
automatically forward the commands to the HA stack, so
您也可以使用常规的虚拟机和容器管理命令。它们会自动将命令转发到高可用性(HA)堆栈,因此
# qm start 100
simply sets the requested state to started. The same applies to qm
stop, which sets the requested state to stopped.
只需将请求的状态设置为已启动。同样适用于 qm stop,它将请求的状态设置为已停止。
|
|
The HA stack works fully asynchronous and needs to communicate
with other cluster members. Therefore, it takes some seconds until you see
the result of such actions. HA 堆栈完全异步工作,需要与其他集群成员通信。因此,您需要等待几秒钟才能看到这些操作的结果。 |
To view the current HA resource configuration use:
要查看当前的 HA 资源配置,请使用:
# ha-manager config
vm:100
state stopped
And you can view the actual HA manager and resource state with:
您还可以查看实际的 HA 管理器和资源状态:
# ha-manager status
quorum OK
master node1 (active, Wed Nov 23 11:07:23 2016)
lrm elsa (active, Wed Nov 23 11:07:19 2016)
service vm:100 (node1, started)
You can also initiate resource migration to other nodes:
您也可以启动资源迁移到其他节点:
# ha-manager migrate vm:100 node2
This uses online migration and tries to keep the VM running. Online
migration needs to transfer all used memory over the network, so it is
sometimes faster to stop the VM, then restart it on the new node. This can be
done using the relocate command:
这使用在线迁移并尝试保持虚拟机运行。在线迁移需要通过网络传输所有使用的内存,因此有时停止虚拟机然后在新节点上重启会更快。可以使用 relocate 命令来完成此操作:
# ha-manager relocate vm:100 node2
Finally, you can remove the resource from the HA configuration using
the following command:
最后,您可以使用以下命令从 HA 配置中移除资源:
# ha-manager remove vm:100
|
|
This does not start or stop the resource. 这不会启动或停止该资源。 |
But all HA related tasks can be done in the GUI, so there is no need to
use the command line at all.
但所有与 HA 相关的任务都可以在图形界面中完成,因此完全不需要使用命令行。
15.4. How It Works
15.4. 工作原理
This section provides a detailed description of the Proxmox VE HA manager
internals. It describes all involved daemons and how they work
together. To provide HA, two daemons run on each node:
本节详细介绍了 Proxmox VE 高可用性管理器的内部结构。它描述了所有相关的守护进程及其协作方式。为了提供高可用性,每个节点上运行两个守护进程:
- pve-ha-lrm
-
The local resource manager (LRM), which controls the services running on the local node. It reads the requested states for its services from the current manager status file and executes the respective commands.
本地资源管理器(LRM),负责控制本地节点上运行的服务。它从当前管理器状态文件中读取其服务的请求状态,并执行相应的命令。 - pve-ha-crm
-
The cluster resource manager (CRM), which makes the cluster-wide decisions. It sends commands to the LRM, processes the results, and moves resources to other nodes if something fails. The CRM also handles node fencing.
集群资源管理器(CRM),负责做出整个集群范围内的决策。它向本地资源管理器(LRM)发送命令,处理结果,并在出现故障时将资源迁移到其他节点。CRM 还负责节点隔离。
|
|
Locks in the LRM & CRM Locks are provided by our distributed configuration file system (pmxcfs).
They are used to guarantee that each LRM is active once and working. As an
LRM only executes actions when it holds its lock, we can mark a failed node
as fenced if we can acquire its lock. This then lets us recover any failed
HA services securely without any interference from the now unknown failed node.
This all gets supervised by the CRM which currently holds the manager master
lock.LRM 和 CRM 中的锁 锁由我们的分布式配置文件系统(pmxcfs)提供。它们用于保证每个 LRM 只激活一次并正常工作。由于 LRM 只有在持有其锁时才执行操作,我们可以在获取到某个节点的锁时将其标记为已隔离的故障节点。这样就能安全地恢复任何失败的高可用服务,而不会受到现在未知的故障节点的干扰。所有这些都由当前持有管理主锁的 CRM 进行监督。 |
15.4.1. Service States 15.4.1. 服务状态
The CRM uses a service state enumeration to record the current service
state. This state is displayed on the GUI and can be queried using
the ha-manager command-line tool:
CRM 使用服务状态枚举来记录当前的服务状态。该状态显示在图形用户界面上,并且可以使用 ha-manager 命令行工具查询:
# ha-manager status
quorum OK
master elsa (active, Mon Nov 21 07:23:29 2016)
lrm elsa (active, Mon Nov 21 07:23:22 2016)
service ct:100 (elsa, stopped)
service ct:102 (elsa, started)
service vm:501 (elsa, started)
Here is the list of possible states:
以下是可能的状态列表:
- stopped
-
Service is stopped (confirmed by LRM). If the LRM detects a stopped service is still running, it will stop it again.
服务已停止(由 LRM 确认)。如果 LRM 发现已停止的服务仍在运行,它将再次停止该服务。 - request_stop
-
Service should be stopped. The CRM waits for confirmation from the LRM.
服务应停止。CRM 正在等待来自 LRM 的确认。 - stopping
-
Pending stop request. But the CRM did not get the request so far.
停止请求待处理。但 CRM 目前尚未收到该请求。 - started 已启动
-
Service is active, and the LRM should start it ASAP if not already running. If the service fails and is detected as not running, the LRM restarts it (see Start Failure Policy).
服务处于活动状态,如果尚未运行,LRM 应尽快启动它。如果服务失败且被检测到未运行,LRM 会重新启动它(参见启动失败策略)。 - starting 正在启动
-
Pending start request. But the CRM has not got any confirmation from the LRM that the service is running.
启动请求待处理。但 CRM 尚未收到 LRM 关于服务正在运行的任何确认。 - fence 隔离
-
Wait for node fencing as the service node is not inside the quorate cluster partition (see Fencing). As soon as node gets fenced successfully the service will be placed into the recovery state.
等待节点隔离,因为服务节点不在法定集群分区内(参见隔离)。一旦节点成功隔离,服务将被置于恢复状态。 - recovery 恢复
-
Wait for recovery of the service. The HA manager tries to find a new node where the service can run. This search depends not only on the list of online and quorate nodes, but also on whether the service is a group member and how such a group is limited. As soon as a new available node is found, the service will be moved there and initially placed into stopped state. If it’s configured to run, the new node will do so.
等待服务恢复。HA 管理器尝试找到一个新的节点来运行该服务。此搜索不仅取决于在线且法定的节点列表,还取决于服务是否为组成员以及该组的限制。一旦找到新的可用节点,服务将被迁移到该节点,并初始置于停止状态。如果配置为运行,新的节点将启动该服务。 - freeze 冻结
-
Do not touch the service state. We use this state while we reboot a node, or when we restart the LRM daemon (see Package Updates).
不要触碰服务状态。我们在重启节点或重启 LRM 守护进程时使用此状态(参见包更新)。 - ignored 忽略
-
Act as if the service were not managed by HA at all. Useful, when full control over the service is desired temporarily, without removing it from the HA configuration.
表现得好像该服务根本不受 HA 管理。当暂时需要对服务进行完全控制而不将其从 HA 配置中移除时,这非常有用。 - migrate 迁移
-
Migrate service (live) to other node.
将服务(在线)迁移到其他节点。 - error 错误
-
Service is disabled because of LRM errors. Needs manual intervention (see Error Recovery).
服务因 LRM 错误被禁用。需要手动干预(参见错误恢复)。 - queued 排队中
-
Service is newly added, and the CRM has not seen it so far.
服务是新添加的,CRM 目前尚未检测到它。 - disabled 已禁用
-
Service is stopped and marked as disabled
服务已停止并标记为禁用状态。
15.4.2. Local Resource Manager
15.4.2. 本地资源管理器
The local resource manager (pve-ha-lrm) is started as a daemon on
boot and waits until the HA cluster is quorate and thus cluster-wide
locks are working.
本地资源管理器(pve-ha-lrm)作为守护进程在启动时启动,并等待直到 HA 集群达到法定人数,从而集群范围内的锁定功能生效。
It can be in three states:
它可以处于三种状态:
-
wait for agent lock
等待代理锁 -
The LRM waits for our exclusive lock. This is also used as idle state if no service is configured.
LRM 等待我们的独占锁。如果没有配置任何服务,这也用作空闲状态。 - active 活动
-
The LRM holds its exclusive lock and has services configured.
LRM 持有其独占锁并配置了服务。 - lost agent lock 丢失代理锁
-
The LRM lost its lock, this means a failure happened and quorum was lost.
LRM 失去了锁,这意味着发生了故障并且失去了法定人数。
After the LRM enters the active state, it reads the manager status
file in /etc/pve/ha/manager_status and determines the commands it
has to execute for the services it owns.
For each command, a worker is started; these workers run in
parallel and are limited to at most 4 by default. This default setting
may be changed through the datacenter configuration key max_worker.
When finished, the worker process gets collected and its result is saved for
the CRM.
当 LRM 进入活动状态时,它会读取 /etc/pve/ha/manager_status 中的管理器状态文件,并确定它需要为其拥有的服务执行的命令。对于每个命令,都会启动一个工作进程,这些工作进程并行运行,默认最多限制为 4 个。此默认设置可以通过数据中心配置键 max_worker 进行更改。完成后,工作进程会被回收,其结果会保存给 CRM。
|
|
Maximum Concurrent Worker Adjustment Tips The default value of at most 4 concurrent workers may be unsuited for
a specific setup. For example, 4 live migrations may occur at the same
time, which can lead to network congestions with slower networks and/or
big (memory wise) services. Also, ensure that in the worst case, congestion is
at a minimum, even if this means lowering the max_worker value. On the
contrary, if you have a particularly powerful, high-end setup you may also want
to increase it.最大并发工作进程调整提示 默认最多 4 个并发工作进程的值可能不适合特定的设置。例如,可能会同时发生 4 次实时迁移,这可能导致网络拥堵,尤其是在网络较慢和/或服务内存较大的情况下。此外,确保在最坏情况下,拥堵降到最低,即使这意味着降低 max_worker 的值。相反,如果您拥有特别强大、高端的配置,也可以考虑增加该值。 |
Each command requested by the CRM is uniquely identifiable by a UID. When
the worker finishes, its result will be processed and written in the LRM
status file /etc/pve/nodes/<nodename>/lrm_status. There the CRM may collect
it and let its state machine - respective to the commands output - act on it.
CRM 请求的每个命令都由一个唯一的 UID 标识。当工作进程完成后,其结果将被处理并写入 LRM 状态文件 /etc/pve/nodes/<nodename>/lrm_status。CRM 可以在此收集结果,并让其状态机根据命令输出对其进行处理。
The actions on each service between CRM and LRM are normally always synced.
This means that the CRM requests a state uniquely marked by a UID, the LRM
then executes this action one time and writes back the result, which is also
identifiable by the same UID. This is needed so that the LRM does not
execute an outdated command.
The only exceptions to this behaviour are the stop and error commands;
these two do not depend on the result produced and are executed
always in the case of the stopped state and once in the case of
the error state.
CRM 与 LRM 之间对每个服务的操作通常始终保持同步。这意味着 CRM 请求一个由 UID 唯一标记的状态,LRM 随后执行该操作一次并写回结果,该结果也由相同的 UID 标识。这是为了防止 LRM 执行过时的命令。唯一的例外是停止和错误命令;这两种命令不依赖于产生的结果,在停止状态时总是执行,在错误状态时执行一次。
|
|
Read the Logs 查看日志 The HA Stack logs every action it makes. This helps to understand what
and also why something happens in the cluster. Here its important to see
what both daemons, the LRM and the CRM, did. You may use
journalctl -u pve-ha-lrm on the node(s) where the service is and
the same command for the pve-ha-crm on the node which is the current master.HA 堆栈会记录其执行的每个操作。这有助于理解集群中发生了什么以及为什么发生。这里重要的是查看两个守护进程,LRM 和 CRM,所做的操作。你可以在服务所在的节点上使用 journalctl -u pve-ha-lrm 命令,在当前主节点上使用相同的命令查看 pve-ha-crm。 |
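For reference, the two commands mentioned in the note above (the standard systemd units of the two daemons):
journalctl -u pve-ha-lrm
journalctl -u pve-ha-crm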
15.4.3. Cluster Resource Manager
15.4.3. 集群资源管理器
The cluster resource manager (pve-ha-crm) starts on each node and
waits there for the manager lock, which can only be held by one node
at a time. The node which successfully acquires the manager lock gets
promoted to the CRM master.
集群资源管理器(pve-ha-crm)在每个节点上启动,并在那里等待管理锁,该锁一次只能被一个节点持有。成功获取管理锁的节点将被提升为 CRM 主节点。
It can be in three states:
它可以处于三种状态:
-
wait for agent lock
等待代理锁 -
The CRM waits for our exclusive lock. This is also used as idle state if no service is configured
CRM 正在等待我们的独占锁。这也用作空闲状态,如果没有配置服务 - active 活动
-
The CRM holds its exclusive lock and has services configured
CRM 持有其独占锁并配置了服务 - lost agent lock 丢失代理锁
-
The CRM lost its lock, this means a failure happened and quorum was lost.
CRM 失去了锁,这意味着发生了故障并且失去了法定人数。
Its main task is to manage the services which are configured to be highly
available and try to always enforce the requested state. For example, a
service with the requested state started will be started if its not
already running. If it crashes it will be automatically started again.
Thus the CRM dictates the actions the LRM needs to execute.
它的主要任务是管理被配置为高可用的服务,并尽力始终强制执行请求的状态。例如,请求状态为已启动的服务,如果尚未运行,将会被启动。如果它崩溃了,将会自动重新启动。因此,CRM 决定了 LRM 需要执行的操作。
When a node leaves the cluster quorum, its state changes to unknown.
If the current CRM can then secure the failed node’s lock, the services
will be stolen and restarted on another node.
当一个节点离开集群法定人数时,其状态会变为未知。如果当前的 CRM 能够获取失败节点的锁,服务将被抢占并在另一个节点上重新启动。
When a cluster member determines that it is no longer in the cluster quorum, the
LRM waits for a new quorum to form. Until there is a cluster quorum, the node
cannot reset the watchdog. If there are active services on the node, or if the
LRM or CRM process is not scheduled or is killed, this will trigger a reboot
after the watchdog has timed out (this happens after 60 seconds).
当集群成员确定自己不再处于集群法定人数中时,LRM 会等待新的法定人数形成。在没有集群法定人数之前,节点无法重置看门狗。如果节点上有活动服务,或者 LRM 或 CRM 进程未被调度或被终止,看门狗超时后(60 秒后)将触发重启。
Note that if a node has an active CRM but the LRM is idle, a quorum loss will
not trigger a self-fence reset. The reason for this is that all state files and
configurations that the CRM accesses are backed up by the
clustered configuration file system, which becomes
read-only upon quorum loss. This means that the CRM only needs to protect itself
against its process being scheduled for too long, in which case another CRM
could take over unaware of the situation, causing corruption of the HA state.
The open watchdog ensures that this cannot happen.
请注意,如果一个节点有一个活动的 CRM 但 LRM 处于空闲状态,仲裁丢失不会触发自我隔离重置。原因是 CRM 访问的所有状态文件和配置都由集群配置文件系统备份,该文件系统在仲裁丢失时变为只读。这意味着 CRM 只需要防止其进程被调度时间过长,在这种情况下,另一个 CRM 可能会在不知情的情况下接管,导致 HA 状态损坏。开放的看门狗确保这种情况不会发生。
If no service is configured for more than 15 minutes, the CRM automatically
returns to the idle state and closes the watchdog completely.
如果超过 15 分钟没有配置任何服务,CRM 会自动返回空闲状态并完全关闭看门狗。
15.5. HA Simulator 15.5. HA 模拟器
By using the HA simulator you can test and learn all functionalities of the
Proxmox VE HA solutions.
通过使用 HA 模拟器,您可以测试和学习 Proxmox VE HA 解决方案的所有功能。
By default, the simulator allows you to watch and test the behaviour of a
real-world 3 node cluster with 6 VMs. You can also add or remove additional VMs
or containers.
默认情况下,模拟器允许您观察和测试一个由 3 个节点和 6 个虚拟机组成的真实集群的行为。您还可以添加或移除额外的虚拟机或容器。
You do not have to setup or configure a real cluster, the HA simulator runs out
of the box.
您无需设置或配置真实集群,HA 模拟器开箱即用。
Install with apt: 使用 apt 安装:
apt install pve-ha-simulator
You can even install the package on any Debian-based system without any
other Proxmox VE packages. For that you will need to download the package and
copy it to the system you want to run it on for installation. When you install
the package with apt from the local file system it will also resolve the
required dependencies for you.
您甚至可以在任何基于 Debian 的系统上安装该包,而无需其他 Proxmox VE 包。为此,您需要下载该包并将其复制到您想要运行安装的系统上。当您从本地文件系统使用 apt 安装该包时,它还会为您解决所需的依赖关系。
To start the simulator on a remote machine you must have an X11 redirection to
your current system.
要在远程机器上启动模拟器,您必须将 X11 重定向到您当前的系统。
If you are on a Linux machine you can use:
如果您使用的是 Linux 机器,可以使用:
ssh root@<IPofPVE> -Y
On Windows it works with mobaxterm.
在 Windows 上,可以使用 mobaxterm。
After connecting to an existing Proxmox VE with the simulator installed or
installing it on your local Debian-based system manually, you can try it out as
follows.
连接到已安装模拟器的现有 Proxmox VE,或在本地基于 Debian 的系统上手动安装后,您可以按如下方式尝试使用。
First you need to create a working directory where the simulator saves its
current state and writes its default config:
首先,您需要创建一个工作目录,模拟器将在此保存其当前状态并写入默认配置:
mkdir working
Then, simply pass the created directory as a parameter to pve-ha-simulator:
然后,只需将创建的目录作为参数传递给 pve-ha-simulator:
pve-ha-simulator working/
You can then start, stop, migrate the simulated HA services, or even check out
what happens on a node failure.
之后,您可以启动、停止、迁移模拟的 HA 服务,甚至查看节点故障时的情况。
15.6. Configuration 15.6. 配置
The HA stack is well integrated into the Proxmox VE API. So, for example,
HA can be configured via the ha-manager command-line interface, or
the Proxmox VE web interface - both interfaces provide an easy way to
manage HA. Automation tools can use the API directly.
HA 堆栈与 Proxmox VE API 集成良好。例如,HA 可以通过 ha-manager 命令行界面或 Proxmox VE 网页界面进行配置——这两种界面都提供了便捷的 HA 管理方式。自动化工具可以直接使用 API。
All HA configuration files are within /etc/pve/ha/, so they get
automatically distributed to the cluster nodes, and all nodes share
the same HA configuration.
所有 HA 配置文件都位于/etc/pve/ha/目录下,因此它们会自动分发到集群节点,所有节点共享相同的 HA 配置。
15.6.1. Resources 15.6.1. 资源
The resource configuration file /etc/pve/ha/resources.cfg stores
the list of resources managed by ha-manager. A resource configuration
inside that list looks like this:
资源配置文件/etc/pve/ha/resources.cfg 存储由 ha-manager 管理的资源列表。该列表中的资源配置如下所示:
<type>: <name>
<property> <value>
...
It starts with a resource type followed by a resource specific name,
separated by a colon. Together this forms the HA resource ID, which is
used by all ha-manager commands to uniquely identify a resource
(example: vm:100 or ct:101). The next lines contain additional
properties:
它以资源类型开头,后跟资源特定名称,两者之间用冒号分隔。组合在一起形成 HA 资源 ID,所有 ha-manager 命令都使用该 ID 来唯一标识资源(例如:vm:100 或 ct:101)。接下来的几行包含附加属性:
- comment: <string>
-
Description. 描述。
- group: <string>
-
The HA group identifier.
HA 组标识符。 -
max_relocate: <integer> (0 - N) (default = 1)
max_relocate: <整数> (0 - N)(默认值 = 1) -
Maximal number of service relocate tries when a service fails to start.
服务启动失败时,最大迁移尝试次数。 -
max_restart: <integer> (0 - N) (default = 1)
max_restart: <整数> (0 - N)(默认值 = 1) -
Maximal number of tries to restart the service on a node after its start failed.
在节点上启动服务失败后,最大重启尝试次数。 -
state: <disabled | enabled | ignored | started | stopped> (default = started)
状态:<禁用 | 启用 | 忽略 | 已启动 | 已停止>(默认 = 已启动) -
Requested resource state. The CRM reads this state and acts accordingly. Please note that enabled is just an alias for started.
请求的资源状态。CRM 会读取此状态并相应地采取行动。请注意,启用只是已启动的别名。- started 已启动
-
The CRM tries to start the resource. Service state is set to started after a successful start. On node failures, or when the start fails, it tries to recover the resource. If everything fails, the service state is set to error.
CRM 尝试启动资源。启动成功后,服务状态被设置为已启动。在节点故障或启动失败时,它会尝试恢复资源。如果所有操作都失败,服务状态将被设置为错误。 - stopped 已停止
-
The CRM tries to keep the resource in stopped state, but it still tries to relocate the resources on node failures.
CRM 尝试保持资源处于已停止状态,但在节点故障时仍会尝试重新定位资源。 - disabled 已禁用
-
The CRM tries to put the resource in stopped state, but does not try to relocate the resources on node failures. The main purpose of this state is error recovery, because it is the only way to move a resource out of the error state.
CRM 会尝试将资源置于停止状态,但不会在节点故障时尝试重新定位资源。此状态的主要目的是错误恢复,因为这是将资源从错误状态中移出的唯一方法。 - ignored 忽略
-
The resource gets removed from the manager status and so the CRM and the LRM do not touch the resource anymore. All Proxmox VE API calls affecting this resource will be executed directly, bypassing the HA stack. CRM commands will be thrown away while the resource is in this state. The resource will not get relocated on node failures.
资源将从管理器状态中移除,因此 CRM 和 LRM 不再操作该资源。所有影响该资源的 {pve} API 调用将直接执行,绕过 HA 栈。当资源处于此状态时,CRM 命令将被丢弃。资源在节点故障时不会被重新定位。
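To see which resources are configured and which state the CRM currently reports for them, you can, for example, use the following ha-manager subcommands:
# ha-manager config
# ha-manager status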
Here is a real world example with one VM and one container. As you see,
the syntax of those files is really simple, so it is even possible to
read or edit those files using your favorite editor:
这里是一个包含一台虚拟机和一个容器的实际示例。如你所见,这些文件的语法非常简单,因此甚至可以使用你喜欢的编辑器来读取或编辑这些文件:
配置示例(/etc/pve/ha/resources.cfg)
vm: 501
state started
max_relocate 2
ct: 102
# Note: use default settings for everything
# ha-manager add vm:501 --state started --max_relocate 2
# ha-manager add ct:102
15.6.2. Groups 15.6.2. 组
The HA group configuration file /etc/pve/ha/groups.cfg is used to
define groups of cluster nodes. A resource can be restricted to run
only on the members of such a group. A group configuration looks like
this:
HA 组配置文件 /etc/pve/ha/groups.cfg 用于定义集群节点的组。资源可以被限制只在该组的成员上运行。组配置如下所示:
group: <group>
nodes <node_list>
<property> <value>
...
- comment: <string> 注释:<string>
-
Description. 描述。
-
nodes: <node>[:<pri>]{,<node>[:<pri>]}*
节点:<node>[:<pri>]{,<node>[:<pri>]}* -
List of cluster node members, where a priority can be given to each node. A resource bound to a group will run on the available nodes with the highest priority. If there are more nodes in the highest priority class, the services will get distributed to those nodes. The priorities have a relative meaning only. The higher the number, the higher the priority.
集群节点成员列表,可以为每个节点指定优先级。绑定到某个组的资源将在具有最高优先级的可用节点上运行。如果最高优先级类别中有多个节点,服务将分布到这些节点上。优先级仅具有相对意义,数字越大,优先级越高。 -
nofailback: <boolean> (default = 0)
nofailback: <布尔值>(默认 = 0) -
The CRM tries to run services on the node with the highest priority. If a node with higher priority comes online, the CRM migrates the service to that node. Enabling nofailback prevents that behavior.
CRM 会尝试在优先级最高的节点上运行服务。如果一个优先级更高的节点上线,CRM 会将服务迁移到该节点。启用 nofailback 可以防止这种行为。 -
restricted: <boolean> (default = 0)
restricted: <布尔值>(默认 = 0) -
Resources bound to restricted groups may only run on nodes defined by the group. The resource will be placed in the stopped state if no group node member is online. Resources on unrestricted groups may run on any cluster node if all group members are offline, but they will migrate back as soon as a group member comes online. One can implement a preferred node behavior using an unrestricted group with only one member.
绑定到受限组的资源只能在该组定义的节点上运行。如果没有组内节点成员在线,资源将被置于停止状态。绑定到非受限组的资源如果所有组成员都离线,可以在任何集群节点上运行,但一旦有组成员上线,它们会迁移回去。可以通过仅包含一个成员的非受限组来实现首选节点行为。
A common requirement is that a resource should run on a specific
node. Usually the resource is able to run on other nodes, so you can define
an unrestricted group with a single member:
一个常见的需求是资源应运行在特定的节点上。通常资源能够在其他节点上运行,因此你可以定义一个只有单个成员的无限制组:
# ha-manager groupadd prefer_node1 --nodes node1
For bigger clusters, it makes sense to define a more detailed failover
behavior. For example, you may want to run a set of services on
node1 if possible. If node1 is not available, you want to run them
equally split on node2 and node3. If those nodes also fail, the
services should run on node4. To achieve this you could set the node
list to:
对于较大的集群,定义更详细的故障转移行为是有意义的。例如,你可能希望尽可能在 node1 上运行一组服务。如果 node1 不可用,你希望这些服务在 node2 和 node3 上平均分配运行。如果这些节点也失败,服务应运行在 node4 上。为实现此目的,你可以将节点列表设置为:
# ha-manager groupadd mygroup1 -nodes "node1:2,node2:1,node3:1,node4"
Another use case is if a resource uses other resources only available
on specific nodes, lets say node1 and node2. We need to make sure
that HA manager does not use other nodes, so we need to create a
restricted group with said nodes:
另一个用例是,如果一个资源使用的其他资源仅在特定节点上可用,比如 node1 和 node2。我们需要确保 HA 管理器不使用其他节点,因此需要创建一个包含上述节点的受限组:
# ha-manager groupadd mygroup2 -nodes "node1,node2" -restricted
The above commands created the following group configuration file:
上述命令创建了以下组配置文件:
配置示例(/etc/pve/ha/groups.cfg)
group: prefer_node1
nodes node1
group: mygroup1
nodes node2:1,node4,node1:2,node3:1
group: mygroup2
nodes node2,node1
restricted 1
The nofailback option is mostly useful to avoid unwanted resource
movements during administration tasks. For example, if you need to
migrate a service to a node which doesn’t have the highest priority in the
group, you need to tell the HA manager not to instantly move this service
back by setting the nofailback option.
nofailback 选项主要用于避免在管理任务期间资源的非预期移动。例如,如果您需要将服务迁移到组中优先级不是最高的节点,您需要通过设置 nofailback 选项告诉 HA 管理器不要立即将该服务移回。
Another scenario is when a service was fenced and it got recovered to
another node. The admin tries to repair the fenced node and brings it
up online again to investigate the cause of failure and check if it runs
stably again. Setting the nofailback flag prevents the recovered services from
moving straight back to the fenced node.
另一种情况是服务被隔离(fenced)后恢复到了另一个节点。管理员尝试修复被隔离的节点并重新上线,以调查故障原因并检查其是否能稳定运行。设置 nofailback 标志可以防止恢复的服务直接移回被隔离的节点。
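For example, a sketch of setting this flag on the prefer_node1 group created above (assuming the group already exists):
# ha-manager groupset prefer_node1 --nofailback 1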
15.7. Fencing 15.7. 隔离(Fencing)
On node failures, fencing ensures that the erroneous node is
guaranteed to be offline. This is required to make sure that no
resource runs twice when it gets recovered on another node. This is a
really important task, because without this, it would not be possible to
recover a resource on another node.
在节点故障时,围栏机制确保错误的节点被保证离线。这是为了确保在资源在另一节点恢复时,不会出现资源被同时运行两次的情况。这是一项非常重要的任务,因为没有它,就无法在另一节点上恢复资源。
If a node did not get fenced, it would be in an unknown state where
it may have still access to shared resources. This is really
dangerous! Imagine that every network but the storage one broke. Now,
while not reachable from the public network, the VM still runs and
writes to the shared storage.
如果节点没有被围栏,它将处于未知状态,可能仍然能够访问共享资源。这是非常危险的!想象一下,除了存储网络外,所有网络都断开了。此时,虽然无法从公共网络访问,但虚拟机仍在运行并写入共享存储。
If we then simply start up this VM on another node, we would get a
dangerous race condition, because we write from both nodes. Such
conditions can destroy all VM data and the whole VM could be rendered
unusable. The recovery could also fail if the storage protects against
multiple mounts.
如果我们随后在另一节点上简单地启动该虚拟机,就会出现危险的竞态条件,因为两个节点都会写入数据。这种情况可能会破坏所有虚拟机数据,使整个虚拟机无法使用。如果存储系统防止多重挂载,恢复也可能失败。
15.7.1. How Proxmox VE Fences
15.7.1. Proxmox VE 如何进行围栏
There are different methods to fence a node, for example, fence
devices which cut off the power from the node or disable their
communication completely. Those are often quite expensive and bring
additional critical components into a system, because if they fail you
cannot recover any service.
有多种方法可以对节点进行隔离,例如使用隔离设备切断节点的电源或完全禁用其通信。这些设备通常相当昂贵,并且会为系统带来额外的关键组件,因为如果它们发生故障,您将无法恢复任何服务。
We thus wanted to integrate a simpler fencing method, which does not
require additional external hardware. This can be done using
watchdog timers.
因此,我们希望集成一种更简单的隔离方法,不需要额外的外部硬件。这可以通过使用看门狗定时器来实现。
可能的隔离方法
-
external power switches 外部电源开关
-
isolate nodes by disabling complete network traffic on the switch
通过禁用交换机上的全部网络流量来隔离节点 -
self fencing using watchdog timers
使用看门狗定时器进行自我隔离
Watchdog timers have been widely used in critical and dependable systems
since the beginning of microcontrollers. They are often simple, independent
integrated circuits which are used to detect and recover from computer malfunctions.
看门狗定时器自微控制器诞生以来就被广泛应用于关键和可靠的系统中。它们通常是简单、独立的集成电路,用于检测和恢复计算机故障。
During normal operation, ha-manager regularly resets the watchdog
timer to prevent it from elapsing. If, due to a hardware fault or
program error, the computer fails to reset the watchdog, the timer
will elapse and trigger a reset of the whole server (reboot).
在正常运行期间,ha-manager 会定期重置看门狗定时器以防止其超时。如果由于硬件故障或程序错误,计算机未能重置看门狗,定时器将超时并触发整个服务器的重置(重启)。
Recent server motherboards often include such hardware watchdogs, but
these need to be configured. If no watchdog is available or
configured, we fall back to the Linux Kernel softdog. While still
reliable, it is not independent of the server's hardware, and thus has
a lower reliability than a hardware watchdog.
近年来的服务器主板通常包含此类硬件看门狗,但需要进行配置。如果没有可用或配置的看门狗,我们将回退到 Linux 内核的软狗。虽然软狗仍然可靠,但它不独立于服务器硬件,因此其可靠性低于硬件看门狗。
15.7.2. Configure Hardware Watchdog
15.7.2. 配置硬件看门狗
By default, all hardware watchdog modules are blocked for security
reasons. They are like a loaded gun if not correctly initialized. To
enable a hardware watchdog, you need to specify the module to load in
/etc/default/pve-ha-manager, for example:
默认情况下,出于安全原因,所有硬件看门狗模块都被阻止。它们如果未正确初始化,就像一把上膛的枪。要启用硬件看门狗,需要在 /etc/default/pve-ha-manager 中指定要加载的模块,例如:
# select watchdog module (default is softdog)
WATCHDOG_MODULE=iTCO_wdt
This configuration is read by the watchdog-mux service, which loads
the specified module at startup.
该配置由 watchdog-mux 服务读取,该服务在启动时加载指定的模块。
15.7.3. Recover Fenced Services
15.7.3. 恢复被隔离的服务
After a node failed and its fencing was successful, the CRM tries to
move services from the failed node to nodes which are still online.
当节点发生故障且隔离成功后,CRM 会尝试将服务从故障节点迁移到仍在线的节点上。
The selection of nodes, on which those services get recovered, is
influenced by the resource group settings, the list of currently active
nodes, and their respective active service count.
这些服务恢复到哪些节点,受资源组设置、当前活跃节点列表及其各自活跃服务数量的影响。
The CRM first builds a set out of the intersection between user selected
nodes (from group setting) and available nodes. It then chooses the
subset of nodes with the highest priority, and finally selects the node
with the lowest active service count. This minimizes the possibility
of an overloaded node.
CRM 首先构建一个集合,该集合是用户选择的节点(来自组设置)与可用节点的交集。然后选择优先级最高的节点子集,最后选择活跃服务数量最少的节点。这样可以最大限度地减少节点过载的可能性。
|
|
On node failure, the CRM distributes services to the
remaining nodes. This increases the service count on those nodes, and
can lead to high load, especially on small clusters. Please design
your cluster so that it can handle such worst case scenarios. 当节点发生故障时,CRM 会将服务分配到剩余的节点上。这会增加这些节点上的服务数量,尤其是在小型集群中,可能导致负载过高。请设计您的集群,使其能够应对这种最坏情况。 |
15.8. Start Failure Policy
15.8. 启动失败策略
The start failure policy comes into effect if a service failed to start on a
node one or more times. It can be used to configure how often a restart
should be triggered on the same node and how often a service should be
relocated, so that it has an attempt to be started on another node.
The aim of this policy is to circumvent temporary unavailability of shared
resources on a specific node. For example, if a shared storage isn’t available
on a quorate node anymore, for instance due to network problems, but is still
available on other nodes, the relocate policy allows the service to start
nonetheless.
启动失败策略在服务在某个节点上启动失败一次或多次时生效。它可用于配置在同一节点上触发重启的频率以及服务应被迁移的频率,从而尝试在另一节点上启动服务。该策略的目的是规避特定节点上共享资源的临时不可用。例如,如果共享存储因网络问题在有法定人数的节点上不再可用,但在其他节点上仍然可用,迁移策略允许服务仍然启动。
There are two service start recover policy settings which can be configured
specific for each resource.
有两种服务启动恢复策略设置,可以针对每个资源进行具体配置。
- max_restart
-
Maximum number of attempts to restart a failed service on the actual node. The default is set to one.
在实际节点上重启失败服务的最大尝试次数。默认值为一次。 - max_relocate
-
Maximum number of attempts to relocate the service to a different node. A relocate only happens after the max_restart value is exceeded on the actual node. The default is set to one.
将服务迁移到不同节点的最大尝试次数。只有在实际节点上的 max_restart 值被超过后,才会进行迁移。默认值为一次。
|
|
The relocate count state will only reset to zero when the
service had at least one successful start. That means if a service is
re-started without fixing the error only the restart policy gets
repeated. 重新定位计数状态只有在服务至少成功启动一次后才会重置为零。这意味着如果服务在未修复错误的情况下重新启动,只有重启策略会被重复执行。 |
15.9. Error Recovery 15.9. 错误恢复
If, after all attempts, the service state could not be recovered, it gets
placed in an error state. In this state, the service won’t get touched
by the HA stack anymore. The only way out is disabling a service:
如果经过所有尝试后,服务状态仍无法恢复,则服务会进入错误状态。在此状态下,HA 堆栈将不再对该服务进行操作。唯一的解决方法是禁用该服务:
# ha-manager set vm:100 --state disabled
This can also be done in the web interface.
这也可以在网页界面中完成。
To recover from the error state you should do the following:
要从错误状态中恢复,您应执行以下操作:
-
bring the resource back into a safe and consistent state (e.g.: kill its process if the service could not be stopped)
将资源恢复到安全且一致的状态(例如:如果服务无法停止,则终止其进程) -
disable the resource to remove the error flag
禁用该资源以清除错误标志 -
fix the error which led to this failures
修复导致此故障的错误 -
after you fixed all errors you may request that the service starts again
修复所有错误后,您可以请求服务重新启动
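Putting these steps together for a hypothetical resource vm:100, the command-line sequence could look like this (after the guest itself has been brought into a clean state):
# ha-manager set vm:100 --state disabled
# ha-manager set vm:100 --state started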
15.10. Package Updates 15.10. 软件包更新
When updating the ha-manager, you should do one node after the other, never
all at once for various reasons. First, while we test our software
thoroughly, a bug affecting your specific setup cannot totally be ruled out.
Updating one node after the other and checking the functionality of each node
after finishing the update helps to recover from eventual problems, while
updating all at once could result in a broken cluster and is generally not
good practice.
在更新 ha-manager 时,您应逐个节点进行,而不是一次性全部更新,原因有多方面。首先,虽然我们对软件进行了彻底测试,但仍无法完全排除影响您特定环境的漏洞。逐个节点更新并在完成更新后检查每个节点的功能,有助于在出现问题时进行恢复,而一次性全部更新可能导致集群崩溃,且通常不是良好的操作习惯。
Also, the Proxmox VE HA stack uses a request acknowledge protocol to perform
actions between the cluster and the local resource manager. For restarting,
the LRM makes a request to the CRM to freeze all its services. This prevents
them from getting touched by the Cluster during the short time the LRM is restarting.
After that, the LRM may safely close the watchdog during a restart.
Such a restart happens normally during a package update and, as already stated,
an active master CRM is needed to acknowledge the requests from the LRM. If
this is not the case the update process can take too long which, in the worst
case, may result in a reset triggered by the watchdog.
此外,Proxmox VE HA 堆栈使用请求确认协议在集群和本地资源管理器之间执行操作。对于重启,LRM 会向 CRM 发送请求以冻结其所有服务。这可以防止在 LRM 重启的短时间内集群对其进行操作。之后,LRM 可以在重启期间安全地关闭看门狗。这样的重启通常发生在包更新期间,正如前面所述,需要一个活动的主 CRM 来确认来自 LRM 的请求。如果情况不是这样,更新过程可能会耗时过长,最坏情况下可能导致看门狗触发重置。
15.11. Node Maintenance 15.11. 节点维护
Sometimes it is necessary to perform maintenance on a node, such as replacing
hardware or simply installing a new kernel image. This also applies while the
HA stack is in use.
有时需要对节点进行维护,例如更换硬件或简单地安装新的内核镜像。这在使用 HA 堆栈时同样适用。
The HA stack can support you mainly in two types of maintenance:
HA 堆栈主要可以支持您进行两种类型的维护:
-
for general shutdowns or reboots, the behavior can be configured, see Shutdown Policy.
对于一般的关机或重启,可以配置其行为,详见关机策略。 -
for maintenance that does not require a shutdown or reboot, or that should not be switched off automatically after only one reboot, you can enable the manual maintenance mode.
对于不需要关机或重启的维护,或者不应在仅一次重启后自动关闭的维护,可以启用手动维护模式。
15.11.1. Maintenance Mode
15.11.1. 维护模式
You can use the manual maintenance mode to mark the node as unavailable for HA
operation, prompting all services managed by HA to migrate to other nodes.
您可以使用手动维护模式将节点标记为不可用于 HA 操作,促使所有由 HA 管理的服务迁移到其他节点。
The target nodes for these migrations are selected from the other currently
available nodes, and determined by the HA group configuration and the configured
cluster resource scheduler (CRS) mode.
During each migration, the original node will be recorded in the HA manager's
state, so that the service can be moved back again automatically once the
maintenance mode is disabled and the node is back online.
这些迁移的目标节点是从其他当前可用的节点中选择的,并由 HA 组配置和配置的集群资源调度器(CRS)模式决定。在每次迁移过程中,原始节点将被记录在 HA 管理器的状态中,以便在维护模式被禁用且节点重新上线后,服务可以自动迁回。
Currently you can enable or disable the maintenance mode using the ha-manager
CLI tool.
目前,您可以使用 ha-manager 命令行工具启用或禁用维护模式。
为节点启用维护模式
# ha-manager crm-command node-maintenance enable NODENAME
This will queue a CRM command; when the manager processes this command, it will
record the request for maintenance mode in the manager status. This allows you
to submit the command on any node, not just on the one you want to place into,
or take out of, maintenance mode.
这将排队一个 CRM 命令,当管理器处理该命令时,它会在管理器状态中记录维护模式的请求。这允许您在任何节点上提交该命令,而不仅限于您想要进入或退出维护模式的节点。
Once the LRM on the respective node picks the command up it will mark itself as
unavailable, but still process all migration commands. This means that the LRM
self-fencing watchdog will stay active until all active services got moved, and
all running workers finished.
一旦相应节点上的 LRM 接收到命令,它将标记自身为不可用,但仍会处理所有迁移命令。这意味着 LRM 自我隔离看门狗将保持激活状态,直到所有活动服务被迁移,且所有运行中的工作进程完成。
Note that the LRM status will read maintenance mode as soon as the LRM
has picked up the requested state, not only once all services have been moved away;
this user experience is planned to be improved in the future.
For now, you can check for any active HA service left on the node, or watch
out for a log line like: pve-ha-lrm[PID]: watchdog closed (disabled) to know
when the node finished its transition into the maintenance mode.
请注意,LRM 状态一旦接收到请求的状态,就会显示为维护模式,而不仅仅是在所有服务迁移完成后才显示,这种用户体验未来计划改进。目前,您可以检查节点上是否还有任何活动的 HA 服务,或者关注类似以下日志行:pve-ha-lrm[PID]: watchdog closed (disabled),以了解节点何时完成进入维护模式的过渡。
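For example, a rough way to check for remaining HA services on the node and to watch for that log line (NODENAME is a placeholder):
# ha-manager status | grep NODENAME
# journalctl -u pve-ha-lrm -f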
|
|
The manual maintenance mode is not automatically deleted on node reboot,
but only if it is either manually deactivated using the ha-manager CLI or if
the manager-status is manually cleared. 手动维护模式在节点重启时不会自动删除,只有在使用 ha-manager CLI 手动停用或手动清除 manager-status 时才会删除。 |
为节点禁用维护模式
# ha-manager crm-command node-maintenance disable NODENAME
The process of disabling the manual maintenance mode is similar to enabling it.
Using the ha-manager CLI command shown above will queue a CRM command that,
once processed, marks the respective LRM node as available again.
禁用手动维护模式的过程与启用它类似。使用上面显示的 ha-manager CLI 命令将排队一个 CRM 命令,该命令处理后会将相应的 LRM 节点标记为可用。
If you deactivate the maintenance mode, all services that were on the node when
the maintenance mode was activated will be moved back.
如果您停用维护模式,所有在激活维护模式时位于该节点上的服务将被迁移回去。
15.11.2. Shutdown Policy
15.11.2. 关闭策略
Below you will find a description of the different HA policies for a node
shutdown. Currently Conditional is the default due to backward compatibility.
Some users may find that Migrate behaves more as expected.
下面是关于节点关闭的不同 HA 策略的描述。目前由于向后兼容性,默认策略为 Conditional。一些用户可能会发现 Migrate 的行为更符合预期。
The shutdown policy can be configured in the Web UI (Datacenter → Options
→ HA Settings), or directly in datacenter.cfg:
关机策略可以在网页界面中配置(数据中心 → 选项 → HA 设置),也可以直接在 datacenter.cfg 中配置:
ha: shutdown_policy=<value>
Migrate 迁移
Once the Local Resource manager (LRM) gets a shutdown request and this policy
is enabled, it will mark itself as unavailable for the current HA manager.
This triggers a migration of all HA Services currently located on this node.
The LRM will try to delay the shutdown process, until all running services get
moved away. But, this expects that the running services can be migrated to
another node. In other words, the service must not be locally bound, for example
by using hardware passthrough. As non-group member nodes are considered
runnable targets if no group member is available, this policy can still be used
when making use of HA groups with only some nodes selected. But, marking a group
as restricted tells the HA manager that the service cannot run outside of the
chosen set of nodes. If all of those nodes are unavailable, the shutdown will
hang until you manually intervene. Once the shut down node comes back online
again, the previously displaced services will be moved back, if they were not
already manually migrated in-between.
一旦本地资源管理器(LRM)收到关机请求且此策略被启用,它将标记自身为当前 HA 管理器不可用。这会触发将所有当前位于该节点上的 HA 服务迁移。LRM 会尝试延迟关机过程,直到所有运行中的服务被迁移走。但这要求运行中的服务能够迁移到其他节点。换句话说,服务不能是本地绑定的,例如使用硬件直通。由于非组成员节点在没有组成员可用时被视为可运行的目标,因此在使用仅选择部分节点的 HA 组时仍可使用此策略。但将组标记为受限会告诉 HA 管理器该服务不能在所选节点集之外运行。如果所有这些节点都不可用,关机将会挂起,直到你手动干预。一旦关机节点重新上线,之前迁移走的服务将被迁回,前提是它们在此期间没有被手动迁移。
|
|
The watchdog is still active during the migration process on shutdown.
If the node loses quorum it will be fenced and the services will be recovered. 在关机迁移过程中,watchdog 仍然处于激活状态。如果节点失去仲裁权,它将被隔离,服务将被恢复。 |
If you start a (previously stopped) service on a node which is currently being
maintained, the node needs to be fenced to ensure that the service can be moved
and started on another available node.
如果您在当前正在维护的节点上启动一个(之前已停止的)服务,则需要隔离该节点,以确保该服务可以迁移并在另一个可用节点上启动。
Failover 故障转移
This mode ensures that all services get stopped, but that they will also be
recovered, if the current node is not online soon. It can be useful when doing
maintenance on a cluster scale, where live-migrating VMs may not be possible if
too many nodes are powered off at a time, but you still want to ensure HA
services get recovered and started again as soon as possible.
此模式确保所有服务都会停止,但如果当前节点很快无法上线,它们也会被恢复。当进行集群级别的维护时,这种模式非常有用,因为如果同时关闭太多节点,可能无法实时迁移虚拟机,但您仍希望确保高可用性服务能够尽快恢复并重新启动。
Freeze 冻结
This mode ensures that all services get stopped and frozen, so that they won’t
get recovered until the current node is online again.
此模式确保所有服务都被停止并冻结,因此在当前节点重新上线之前,它们不会被恢复。
Conditional 条件
The Conditional shutdown policy automatically detects if a shutdown or a
reboot is requested, and changes behaviour accordingly.
条件关闭策略会自动检测是否请求了关闭或重启,并相应地改变行为。
A shutdown (poweroff) is usually done if it is planned for the node to stay
down for some time. The LRM stops all managed services in this case. This means
that other nodes will take over those services afterwards.
关机(断电)通常在计划节点停机一段时间时进行。此时,LRM 会停止所有受管理的服务。这意味着其他节点随后将接管这些服务。
|
|
Recent hardware has large amounts of memory (RAM). So we stop all
resources, then restart them to avoid online migration of all that RAM. If you
want to use online migration, you need to invoke that manually before you
shutdown the node. 现代硬件拥有大量内存(RAM)。因此,我们先停止所有资源,然后重新启动它们,以避免对所有这些内存进行在线迁移。如果您想使用在线迁移,需要在关机前手动调用该操作。 |
Node reboots are initiated with the reboot command. This is usually done
after installing a new kernel. Please note that this is different from
“shutdown”, because the node immediately starts again.
节点重启是通过 reboot 命令启动的。通常在安装新内核后执行此操作。请注意,这与“关机”不同,因为节点会立即重新启动。
The LRM tells the CRM that it wants to restart, and waits until the CRM puts
all resources into the freeze state (same mechanism is used for
Package Updates). This prevents those resources
from being moved to other nodes. Instead, the CRM starts the resources after the
reboot on the same node.
LRM 告诉 CRM 它想要重启,并等待 CRM 将所有资源置于冻结状态(相同机制也用于包更新)。这防止了这些资源被移动到其他节点。相反,CRM 会在重启后在同一节点上启动这些资源。
Manual Resource Movement
手动资源移动
Last but not least, you can also manually move resources to other nodes, before
you shutdown or restart a node. The advantage is that you have full control,
and you can decide if you want to use online migration or not.
最后但同样重要的是,您还可以在关闭或重启节点之前,手动将资源移动到其他节点。这样做的好处是您拥有完全的控制权,可以决定是否使用在线迁移。
|
|
Please do not kill services like pve-ha-crm, pve-ha-lrm or
watchdog-mux. They manage and use the watchdog, so this can result in an
immediate node reboot or even reset. 请不要终止 pve-ha-crm、pve-ha-lrm 或 watchdog-mux 等服务。它们管理并使用看门狗,因此这可能导致节点立即重启甚至复位。 |
15.12. Cluster Resource Scheduling
15.12. 集群资源调度
The cluster resource scheduler (CRS) mode controls how HA selects nodes for the
recovery of a service as well as for migrations that are triggered by a
shutdown policy. The default mode is basic, you can change it in the Web UI
(Datacenter → Options), or directly in datacenter.cfg:
集群资源调度器(CRS)模式控制 HA 如何选择节点来恢复服务,以及由关机策略触发的迁移。默认模式是 basic,您可以在 Web UI(数据中心 → 选项)中更改,或直接在 datacenter.cfg 中修改:
crs: ha=static
The change will be in effect starting with the next manager round (after a few
seconds).
更改将在下一次管理器轮询时生效(几秒钟后)。
For each service that needs to be recovered or migrated, the scheduler
iteratively chooses the best node among the nodes with the highest priority in
the service’s group.
对于每个需要恢复或迁移的服务,调度器会在该服务组中优先级最高的节点中迭代选择最佳节点。
|
|
There are plans to add modes for (static and dynamic) load-balancing in
the future. 未来计划添加(静态和动态)负载均衡模式。 |
15.12.1. Basic Scheduler
15.12.1. 基本调度器
The number of active HA services on each node is used to choose a recovery node.
Non-HA-managed services are currently not counted.
每个节点上活动的高可用服务数量用于选择恢复节点。目前不计算非高可用管理的服务。
15.12.2. Static-Load Scheduler
15.12.2. 静态负载调度器
|
|
The static mode is still a technology preview. 静态模式仍处于技术预览阶段。 |
Static usage information from HA services on each node is used to choose a
recovery node. Usage of non-HA-managed services is currently not considered.
使用来自每个节点上高可用(HA)服务的静态使用信息来选择恢复节点。目前不考虑非高可用管理服务的使用情况。
For this selection, each node in turn is considered as if the service was
already running on it, using CPU and memory usage from the associated guest
configuration. Then for each such alternative, CPU and memory usage of all nodes
are considered, with memory being weighted much more, because it’s a truly
limited resource. For both, CPU and memory, highest usage among nodes (weighted
more, as ideally no node should be overcommitted) and average usage of all nodes
(to still be able to distinguish in case there already is a more highly
committed node) are considered.
在此选择过程中,依次将每个节点视为服务已在其上运行,使用关联的虚拟机配置中的 CPU 和内存使用情况。然后,对于每个此类备选方案,考虑所有节点的 CPU 和内存使用情况,其中内存权重更大,因为内存是真正有限的资源。对于 CPU 和内存,既考虑节点中最高的使用率(权重更大,因为理想情况下不应有节点过度分配),也考虑所有节点的平均使用率(以便在已有更高负载节点的情况下仍能区分)。
|
|
The more services the more possible combinations there are, so it’s
currently not recommended to use it if you have thousands of HA managed
services. 服务越多,可能的组合也越多,因此如果您有成千上万个由 HA 管理的服务,目前不建议使用它。 |
15.12.3. CRS Scheduling Points
15.12.3. CRS 调度点
The CRS algorithm is not applied for every service in every round, since this
would mean a large number of constant migrations. Depending on the workload,
this could put more strain on the cluster than could be avoided by constant
balancing.
That’s why the Proxmox VE HA manager favors keeping services on their current node.
CRS 算法并不会在每一轮对每个服务都应用,因为这将导致大量的持续迁移。根据工作负载,这可能会给集群带来比持续平衡所能避免的更多的压力。这就是为什么 Proxmox VE HA 管理器更倾向于让服务保持在其当前节点上。
The CRS is currently used at the following scheduling points:
CRS 目前在以下调度点使用:
-
Service recovery (always active). When a node with active HA services fails, all its services need to be recovered to other nodes. The CRS algorithm will be used here to balance that recovery over the remaining nodes.
服务恢复(始终激活)。当一个运行有活动 HA 服务的节点发生故障时,所有其上的服务都需要恢复到其他节点。此处将使用 CRS 算法在剩余节点间平衡恢复过程。 -
HA group config changes (always active). If a node is removed from a group, or its priority is reduced, the HA stack will use the CRS algorithm to find a new target node for the HA services in that group, matching the adapted priority constraints.
HA 组配置变更(始终激活)。如果一个节点被从组中移除,或其优先级被降低,HA 堆栈将使用 CRS 算法为该组中的 HA 服务找到新的目标节点,以匹配调整后的优先级约束。 -
HA service stopped → start transition (opt-in). Requesting that a stopped service should be started is a good opportunity to check for the best suited node as per the CRS algorithm, as moving stopped services is cheaper to do than moving them started, especially if their disk volumes reside on shared storage. You can enable this by setting the ha-rebalance-on-start CRS option in the datacenter config. You can change that option also in the Web UI, under Datacenter → Options → Cluster Resource Scheduling.
HA 服务停止→启动转换(可选)。请求启动已停止的服务是检查 CRS 算法推荐的最佳节点的好机会,因为移动已停止的服务比移动已启动的服务成本更低,尤其当其磁盘卷位于共享存储上时。你可以通过在数据中心配置中设置 ha-rebalance-on-start CRS 选项来启用此功能。你也可以在 Web UI 中更改该选项,路径为数据中心 → 选项 → 集群资源调度。
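As an illustration, enabling the static scheduler together with this opt-in rebalancing directly in datacenter.cfg could look like the following line; this assumes the crs property accepts both sub-options as a comma-separated value:
crs: ha=static,ha-rebalance-on-start=1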
16. Backup and Restore
16. 备份与恢复
Backups are a requirement for any sensible IT deployment, and Proxmox VE
provides a fully integrated solution, using the capabilities of each
storage and each guest system type. This allows the system
administrator to fine-tune, via the mode option, the trade-off between consistency
of the backups and downtime of the guest system.
备份是任何合理 IT 部署的必备条件,Proxmox VE 提供了一个完全集成的解决方案,利用每种存储和每种客户机系统类型的能力。这允许系统管理员通过模式选项在备份的一致性和客户机系统的停机时间之间进行微调。
Proxmox VE backups are always full backups - containing the VM/CT
configuration and all data. Backups can be started via the GUI or via
the vzdump command-line tool.
Proxmox VE 的备份始终是完整备份——包含虚拟机/容器配置和所有数据。备份可以通过图形界面或 vzdump 命令行工具启动。
Before a backup can run, a backup storage must be defined. Refer to the
storage documentation on how to add a storage. It can
either be a Proxmox Backup Server storage, where backups are stored as
de-duplicated chunks and metadata, or a file-level storage, where backups are
stored as regular files. Using Proxmox Backup Server on a dedicated host is
recommended, because of its advanced features. Using an NFS server is a good
alternative. In both cases, you might want to save those backups later to a tape
drive, for off-site archiving.
在备份运行之前,必须定义一个备份存储。有关如何添加存储,请参阅存储文档。备份存储可以是 Proxmox Backup Server 存储,备份以去重的数据块和元数据形式存储,或者是文件级存储,备份以常规文件形式存储。推荐在专用主机上使用 Proxmox Backup Server,因为它具有高级功能。使用 NFS 服务器是一个不错的替代方案。在这两种情况下,您可能希望稍后将这些备份保存到磁带驱动器,以便进行异地归档。
Backup jobs can be scheduled so that they are executed automatically on specific
days and times, for selectable nodes and guest systems. See the
Backup Jobs section for more.
备份任务可以安排在特定的日期和时间自动执行,适用于可选择的节点和客户系统。更多内容请参见备份任务章节。
16.1. Backup Modes 16.1. 备份模式
There are several ways to provide consistency (option mode),
depending on the guest type.
根据客户类型,有多种方式提供一致性(选项模式)。
虚拟机的备份模式:
- stop mode 停止模式
-
This mode provides the highest consistency of the backup, at the cost of a short downtime in the VM operation. It works by executing an orderly shutdown of the VM, and then runs a background QEMU process to backup the VM data. After the backup is started, the VM goes to full operation mode if it was previously running. Consistency is guaranteed by using the live backup feature.
此模式提供最高的一致性备份,但代价是虚拟机操作会有短暂的停机时间。它通过有序关闭虚拟机来工作,然后运行一个后台的 QEMU 进程来备份虚拟机数据。备份开始后,如果虚拟机之前处于运行状态,则会恢复到完全运行模式。通过使用实时备份功能保证一致性。 - suspend mode 挂起模式
-
This mode is provided for compatibility reason, and suspends the VM before calling the snapshot mode. Since suspending the VM results in a longer downtime and does not necessarily improve the data consistency, the use of the snapshot mode is recommended instead.
此模式出于兼容性原因提供,会在调用快照模式之前挂起虚拟机。由于挂起虚拟机会导致更长的停机时间,且不一定能提高数据一致性,因此建议使用快照模式。 - snapshot mode 快照模式
-
This mode provides the lowest operation downtime, at the cost of a small inconsistency risk. It works by performing a Proxmox VE live backup, in which data blocks are copied while the VM is running. If the guest agent is enabled (agent: 1) and running, it calls guest-fsfreeze-freeze and guest-fsfreeze-thaw to improve consistency.
此模式提供最低的操作停机时间,但存在小幅不一致的风险。它通过执行 Proxmox VE 的在线备份来工作,在备份过程中数据块在虚拟机运行时被复制。如果启用了并运行了客户机代理(agent: 1),则会调用 guest-fsfreeze-freeze 和 guest-fsfreeze-thaw 以提高一致性。On Windows guests it is necessary to configure the guest agent if another backup software is used within the guest. See Freeze & Thaw in the guest agent section for more details.
在 Windows 客户机上,如果在客户机内使用其他备份软件,则需要配置客户机代理。更多详情请参见客户机代理部分的冻结与解冻。
A technical overview of the Proxmox VE live backup for QemuServer can
be found online
here.
Proxmox VE QemuServer 实时备份的技术概述可在线查看。
|
|
Proxmox VE live backup provides snapshot-like semantics on any
storage type. It does not require that the underlying storage supports
snapshots. Also please note that since the backups are done via
a background QEMU process, a stopped VM will appear as running for a
short amount of time while the VM disks are being read by QEMU.
However the VM itself is not booted, only its disk(s) are read. Proxmox VE 实时备份在任何存储类型上都提供类似快照的语义。它不要求底层存储支持快照。还请注意,由于备份是通过后台的 QEMU 进程完成的,停止的虚拟机在其磁盘被 QEMU 读取时,会短暂显示为运行状态。但虚拟机本身并未启动,仅其磁盘被读取。 |
容器的备份模式:
- stop mode 停止模式
-
Stop the container for the duration of the backup. This potentially results in a very long downtime.
在备份期间停止容器。这可能导致非常长的停机时间。 - suspend mode 挂起模式
-
This mode uses rsync to copy the container data to a temporary location (see option --tmpdir). Then the container is suspended and a second rsync copies changed files. After that, the container is started (resumed) again. This results in minimal downtime, but needs additional space to hold the container copy.
此模式使用 rsync 将容器数据复制到临时位置(参见选项--tmpdir)。然后容器被挂起,第二次 rsync 复制已更改的文件。之后,容器再次启动(恢复)。这会导致最小的停机时间,但需要额外的空间来保存容器副本。When the container is on a local file system and the target storage of the backup is an NFS/CIFS server, you should set --tmpdir to reside on a local file system too, as this will result in a many fold performance improvement. Use of a local tmpdir is also required if you want to backup a local container using ACLs in suspend mode if the backup storage is an NFS server.
当容器位于本地文件系统上且备份的目标存储为 NFS/CIFS 服务器时,您应将--tmpdir 设置为也位于本地文件系统上,因为这将带来多倍的性能提升。如果您想在挂起模式下使用 ACL 备份本地容器,而备份存储是 NFS 服务器,则也必须使用本地 tmpdir。 - snapshot mode 快照模式
-
This mode uses the snapshotting facilities of the underlying storage. First, the container will be suspended to ensure data consistency. A temporary snapshot of the container’s volumes will be made and the snapshot content will be archived in a tar file. Finally, the temporary snapshot is deleted again.
此模式使用底层存储的快照功能。首先,容器将被暂停以确保数据一致性。然后会对容器的卷创建一个临时快照,并将快照内容归档为 tar 文件。最后,临时快照会被删除。
|
|
snapshot mode requires that all backed up volumes are on a storage that
supports snapshots. Using the backup=no mount point option individual volumes
can be excluded from the backup (and thus this requirement). 快照模式要求所有备份的卷都位于支持快照的存储上。通过使用 backup=no 挂载点选项,可以将单个卷排除在备份之外(从而免除此要求)。 |
|
|
By default additional mount points besides the Root Disk mount point are
not included in backups. For volume mount points you can set the Backup option
to include the mount point in the backup. Device and bind mounts are never
backed up as their content is managed outside the Proxmox VE storage library. 默认情况下,除了根磁盘挂载点外,其他挂载点不会包含在备份中。对于卷挂载点,可以设置 Backup 选项以将该挂载点包含在备份中。设备和绑定挂载点从不备份,因为它们的内容由 Proxmox VE 存储库之外的系统管理。 |
16.1.1. VM Backup Fleecing
16.1.1. 虚拟机备份缓存
When a backup for a VM is started, QEMU will install a "copy-before-write"
filter in its block layer. This filter ensures that upon new guest writes, old
data still needed for the backup is sent to the backup target first. The guest
write blocks until this operation is finished so guest IO to not-yet-backed-up
sectors will be limited by the speed of the backup target.
当虚拟机的备份开始时,QEMU 会在其块层安装一个“写前复制”过滤器。该过滤器确保在新的客户机写入时,仍需备份的旧数据会先发送到备份目标。客户机写入操作会阻塞,直到此操作完成,因此对尚未备份扇区的客户机 IO 速度将受限于备份目标的速度。
With backup fleecing, such old data is cached in a fleecing image rather than
sent directly to the backup target. This can help guest IO performance and even
prevent hangs in certain scenarios, at the cost of requiring more storage space.
通过备份缓存,这些旧数据会被缓存在缓存镜像中,而不是直接发送到备份目标。这可以提升客户机 IO 性能,甚至在某些情况下防止挂起,但代价是需要更多的存储空间。
To manually start a backup of VM 123 with fleecing images created on the
storage local-lvm, run
要手动启动虚拟机 123 的备份,并在存储 local-lvm 上创建缓存镜像,请运行
vzdump 123 --fleecing enabled=1,storage=local-lvm
As always, you can set the option for specific backup jobs, or as a node-wide
fallback via the configuration options. In the UI,
fleecing can be configured in the Advanced tab when editing a backup job.
和往常一样,您可以为特定的备份任务设置该选项,或者通过配置选项作为整个节点的回退设置。在用户界面中,编辑备份任务时可以在“高级”标签页中配置 fleecing。
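For example, a node-wide fallback could be set in /etc/vzdump.conf; this assumes the fleecing property there accepts the same value format as the CLI option above:
fleecing: enabled=1,storage=local-lvm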
The fleecing storage should be a fast local storage, with thin provisioning and
discard support. Examples are LVM-thin, RBD, ZFS with sparse 1 in the storage
configuration, many file-based storages. Ideally, the fleecing storage is a
dedicated storage, so that filling it up will not affect other guests but only fail
the backup. Parts of the fleecing image that have been backed up will be
discarded to try and keep the space usage low.
fleecing 存储应为快速的本地存储,支持精简配置和丢弃功能。示例包括 LVM-thin、RBD、存储配置中带有稀疏 1 的 ZFS,以及许多基于文件的存储。理想情况下,fleecing 存储是专用存储,这样当其满载时不会影响其他虚拟机,只会导致备份失败。已备份的 fleecing 镜像部分将被丢弃,以尽量保持空间使用的低水平。
For file-based storages that do not support discard (for example, NFS before
version 4.2), you should set preallocation off in the storage configuration.
In combination with qcow2 (used automatically as the format for the fleecing
image when the storage supports it), this has the advantage that already
allocated parts of the image can be re-used later, which can still help save
quite a bit of space.
对于不支持丢弃的基于文件的存储(例如,版本 4.2 之前的 NFS),应在存储配置中关闭预分配。结合 qcow2 格式(当存储支持时,fleecing 镜像会自动使用该格式),这样做的优点是已分配的镜像部分可以在以后重复使用,这仍然有助于节省相当多的空间。
|
|
On a storage that’s not thinly provisioned, for example, LVM or ZFS
without the sparse option, the full size of the original disk needs to be
reserved for the fleecing image up-front. On a thinly provisioned storage, the
fleecing image can grow to the same size as the original image only if the guest
re-writes a whole disk while the backup is busy with another disk. 在非精简配置的存储上,例如没有稀疏选项的 LVM 或 ZFS,原始磁盘的完整大小需要预先为快照镜像保留。在精简配置的存储上,快照镜像只有在客户机在备份忙于另一个磁盘时重写了整个磁盘,才会增长到与原始镜像相同的大小。 |
16.1.2. CT Change Detection Mode
16.1.2. 容器变更检测模式
Setting the change detection mode defines the encoding format for the pxar
archives and how changed and unchanged files are handled for container backups
with Proxmox Backup Server as the target.
设置变更检测模式定义了 pxar 归档的编码格式,以及在以 Proxmox 备份服务器为目标的容器备份中,如何处理已更改和未更改的文件。
The change detection mode option can be configured for individual backup jobs in
the Advanced tab while editing a job. Further, this option can be set as
node-wide fallback via the configuration options.
变更检测模式选项可以在编辑备份任务时的高级标签页中为单个备份任务配置。此外,该选项还可以通过配置选项设置为节点范围的回退设置。
There are 3 change detection modes available:
有 3 种变更检测模式可用:
| Mode 模式 | Description 描述 |
|---|---|
| Default 默认 | Read and encode all files into a single archive, using the pxar format version 1. |
| Data 数据 | Read and encode all files, but split data and metadata into separate streams, using the pxar format version 2. |
| Metadata 元数据 | Split streams and use archive format version 2 like Data, but use the metadata archive of the previous snapshot (if one exists) to detect unchanged files, and reuse their data chunks without reading file contents from disk, whenever possible. |
To perform a backup using the change detection mode metadata you can run
要使用变更检测模式元数据执行备份,您可以运行
vzdump 123 --storage pbs-storage --pbs-change-detection-mode metadata
|
|
Backups of VMs or to storage backends other than Proxmox Backup Server are
not affected by this setting. 虚拟机的备份或备份到除 Proxmox Backup Server 以外的存储后端不受此设置影响。 |
16.2. Backup File Names
16.2. 备份文件名
Newer versions of vzdump encode the guest type and the
backup time into the filename, for example
较新的 vzdump 版本会将客户机类型和备份时间编码到文件名中,例如
vzdump-lxc-105-2009_10_09-11_04_43.tar
That way it is possible to store several backups in the same directory. You can
limit the number of backups that are kept with various retention options, see
the Backup Retention section below.
这样可以在同一目录中存储多个备份。您可以通过各种保留选项限制保留的备份数量,详见下文的备份保留部分。
16.3. Backup File Compression
16.3. 备份文件压缩
The backup file can be compressed with one of the following algorithms: lzo
[55], gzip [56] or zstd
[57].
备份文件可以使用以下算法之一进行压缩:lzo [55]、gzip [56] 或 zstd [57]。
Currently, Zstandard (zstd) is the fastest of these three algorithms.
Multi-threading is another advantage of zstd over lzo and gzip. Lzo and gzip
are more widely used and often installed by default.
目前,Zstandard(zstd)是这三种算法中最快的。多线程是 zstd 相较于 lzo 和 gzip 的另一个优势。Lzo 和 gzip 使用更广泛,且通常默认安装。
You can install pigz [58] as a drop-in replacement for gzip to provide better
performance due to multi-threading. For pigz & zstd, the amount of
threads/cores can be adjusted. See the
configuration options below.
您可以安装 pigz [58] 作为 gzip 的替代品,以利用多线程提供更好的性能。对于 pigz 和 zstd,可以调整线程/核心数量。请参见下面的配置选项。
The extension of the backup file name can usually be used to determine which
compression algorithm has been used to create the backup.
备份文件名的扩展名通常可以用来判断备份所使用的压缩算法。
| File Extension | Compression Algorithm |
|---|---|
| .zst | Zstandard (zstd) compression |
| .gz or .tgz .gz 或 .tgz | gzip compression gzip 压缩 |
| .lzo | lzo compression lzo 压缩 |
If the backup file name doesn’t end with one of the above file extensions, then
it was not compressed by vzdump.
如果备份文件名没有以上述文件扩展名之一结尾,则说明该备份文件不是由 vzdump 压缩的。
16.4. Backup Encryption 16.4. 备份加密
For Proxmox Backup Server storages, you can optionally set up client-side
encryption of backups, see the corresponding section.
对于 Proxmox Backup Server 存储,您可以选择设置备份的客户端加密,详见相应章节。
16.5. Backup Jobs 16.5. 备份任务
Besides triggering a backup manually, you can also setup periodic jobs that
backup all, or a selection of virtual guest to a storage. You can manage the
jobs in the UI under Datacenter → Backup or via the /cluster/backup API
endpoint. Both will generate job entries in /etc/pve/jobs.cfg, which are
parsed and executed by the pvescheduler daemon.
除了手动触发备份外,您还可以设置定期任务,将所有或部分虚拟客户机备份到存储中。您可以在界面中通过数据中心 → 备份管理这些任务,或者通过 /cluster/backup API 端点管理。两者都会在 /etc/pve/jobs.cfg 中生成任务条目,由 pvescheduler 守护进程解析并执行。
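For example, you can list the configured backup jobs through that same API endpoint from the shell:
# pvesh get /cluster/backup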
A job is either configured for all cluster nodes or a specific node, and is
executed according to a given schedule. The format for the schedule is very
similar to systemd calendar events, see the
calendar events section for details. The
Schedule field in the UI can be freely edited, and it contains several
examples that can be used as a starting point in its drop-down list.
任务可以配置为针对所有集群节点或特定节点,并根据给定的计划执行。计划的格式与 systemd 日历事件非常相似,详情请参见日历事件部分。界面中的计划字段可以自由编辑,其下拉列表中包含多个示例,可作为起点使用。
You can configure job-specific retention options
overriding those from the storage or node configuration, as well as a
template for notes for additional information to be saved
together with the backup.
您可以配置特定任务的保留选项,以覆盖存储或节点配置中的设置,还可以设置备注模板,用于保存与备份一起的附加信息。
Since scheduled backups miss their execution when the host was offline or the
pvescheduler was disabled during the scheduled time, it is possible to configure
the behaviour for catching up. By enabling the Repeat missed option (in the
Advanced tab in the UI, repeat-missed in the config), you can tell the
scheduler that it should run missed jobs as soon as possible.
由于计划备份在主机离线或 pvescheduler 在计划时间内被禁用时会错过执行,因此可以配置补偿行为。通过启用“重复错过的”选项(在 UI 的高级标签页中,配置中为 repeat-missed),您可以告诉调度器应尽快运行错过的任务。
There are a few settings for tuning backup performance (some of which are
exposed in the Advanced tab in the UI). The most notable is bwlimit for
limiting IO bandwidth. The number of threads used for the compressor can be
controlled with the pigz (replacing gzip) and zstd settings, respectively.
Furthermore, there are ionice (when the BFQ scheduler is used) and, as part of
the performance setting, max-workers (affects VM backups only) and
pbs-entries-max (affects container backups only). See the
configuration options for details.
有一些用于调整备份性能的设置(其中一些在 UI 的高级标签页中可见)。最显著的是用于限制 IO 带宽的 bwlimit。用于压缩器的线程数可以通过 pigz(替代 gzip)或 zstd 设置进行控制。此外,还有 ionice(当使用 BFQ 调度器时)以及作为性能设置一部分的 max-workers(仅影响虚拟机备份)和 pbs-entries-max(仅影响容器备份)。详情请参见配置选项。
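As a sketch, a node-wide default in /etc/vzdump.conf that limits bandwidth and uses four zstd threads might look like this (the values are only illustrative):
bwlimit: 100000
zstd: 4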
16.6. Backup Retention 16.6. 备份保留
With the prune-backups option you can specify which backups you want to keep
in a flexible manner.
通过 prune-backups 选项,您可以灵活指定要保留的备份。
- keep-all <boolean>
-
Keep all backups. If this is true, no other options can be set.
保留所有备份。如果设置为 true,则不能设置其他选项。 - keep-last <N>
-
Keep the last <N> backups.
保留最近的 <N> 个备份。 - keep-hourly <N>
-
Keep backups for the last <N> hours. If there is more than one backup for a single hour, only the latest is kept.
保留最近 <N> 小时的备份。如果某一小时内有多个备份,则只保留最新的备份。 - keep-daily <N>
-
Keep backups for the last <N> days. If there is more than one backup for a single day, only the latest is kept.
保留最近<N>天的备份。如果某一天有多个备份,则只保留最新的备份。 - keep-weekly <N>
-
Keep backups for the last <N> weeks. If there is more than one backup for a single week, only the latest is kept.
保留最近<N>周的备份。如果某一周有多个备份,则只保留最新的备份。
|
|
Weeks start on Monday and end on Sunday. The software uses the
ISO week date-system and handles weeks at the end of the year correctly. 一周从星期一开始,到星期日结束。软件使用 ISO 周日期系统,并能正确处理年末的周数。 |
- keep-monthly <N>
-
Keep backups for the last <N> months. If there is more than one backup for a single month, only the latest is kept.
保留最近 <N> 个月的备份。如果某个月有多个备份,则只保留最新的备份。 - keep-yearly <N>
-
Keep backups for the last <N> years. If there is more than one backup for a single year, only the latest is kept.
保留最近 <N> 年的备份。如果某年有多个备份,则只保留最新的备份。
The retention options are processed in the order given above. Each option
only covers backups within its time period. The next option does not take care
of already covered backups. It will only consider older backups.
保留选项按上述顺序处理。每个选项仅涵盖其时间段内的备份。下一个选项不会处理已覆盖的备份,它只会考虑更早的备份。
Specify the retention options you want to use as a
comma-separated list, for example:
指定您想使用的保留选项,使用逗号分隔,例如:
# vzdump 777 --prune-backups keep-last=3,keep-daily=13,keep-yearly=9
While you can pass prune-backups directly to vzdump, it is often more
sensible to configure the setting on the storage level, which can be done via
the web interface.
虽然可以直接将 prune-backups 传递给 vzdump,但通常更合理的是在存储级别配置该设置,这可以通过网页界面完成。
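For example, the same retention setting could be applied on the storage level with pvesm (the storage ID local is only a stand-in):
# pvesm set local --prune-backups keep-last=3,keep-daily=13,keep-yearly=9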
|
|
The old maxfiles option is deprecated and should be replaced either by
keep-last or, in case maxfiles was 0 for unlimited retention, by
keep-all. 旧的 maxfiles 选项已被弃用,应替换为 keep-last,或者如果 maxfiles 为 0 表示无限保留,则应替换为 keep-all。 |
16.6.1. Prune Simulator 16.6.1. 修剪模拟器
You can use the prune simulator
of the Proxmox Backup Server documentation to explore the effect of different
retention options with various backup schedules.
您可以使用 Proxmox 备份服务器文档中的修剪模拟器,来探索不同保留选项在各种备份计划下的效果。
16.6.2. Retention Settings Example
16.6.2. 保留设置示例
The backup frequency and retention of old backups may depend on how often data
changes, and how important an older state may be, in a specific work load.
When backups act as a company’s document archive, there may also be legal
requirements for how long backups must be kept.
备份频率和旧备份的保留时间可能取决于数据变化的频率,以及在特定工作负载中较旧状态的重要性。当备份作为公司的文档档案时,可能还会有法律要求规定备份必须保存的时间。
For this example, we assume that you are doing daily backups, have a retention
period of 10 years, and the period between backups stored gradually grows.
在此示例中,我们假设您正在进行每日备份,保留期限为 10 年,并且存储的备份间隔逐渐增长。
keep-last=3 - even if only daily backups are taken, an admin may want to
create an extra one just before or after a big upgrade. Setting keep-last
ensures this.
keep-last=3 - 即使只进行每日备份,管理员可能也希望在重大升级前后额外创建一个备份。设置 keep-last 可以确保这一点。
keep-hourly is not set - for daily backups this is not relevant. You cover
extra manual backups already, with keep-last.
keep-hourly 未设置 - 对于每日备份来说,这不相关。您已经通过 keep-last 覆盖了额外的手动备份。
keep-daily=13 - together with keep-last, which covers at least one
day, this ensures that you have at least two weeks of backups.
keep-daily=13 - 结合 keep-last(至少覆盖一天),这确保您至少有两周的备份。
keep-weekly=8 - ensures that you have at least two full months of
weekly backups.
keep-weekly=8 - 确保您至少拥有两整个月的每周备份。
keep-monthly=11 - together with the previous keep settings, this
ensures that you have at least a year of monthly backups.
keep-monthly=11 - 结合之前的保留设置,这确保您至少拥有一年的每月备份。
keep-yearly=9 - this is for the long term archive. As you covered the
current year with the previous options, you would set this to nine for the
remaining ones, giving you a total of at least 10 years of coverage.
keep-yearly=9 - 这是用于长期归档。由于之前的选项已覆盖当前年份,您可以将此设置为九,以覆盖剩余年份,总共至少覆盖 10 年。
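Taken together, the example above corresponds to the following retention options, shown here as a manual vzdump invocation for guest 777:
# vzdump 777 --prune-backups keep-last=3,keep-daily=13,keep-weekly=8,keep-monthly=11,keep-yearly=9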
We recommend that you use a higher retention period than is minimally required
by your environment; you can always reduce it if you find it is unnecessarily
high, but you cannot recreate backups once they have been removed.
我们建议您使用比环境最低要求更长的保留期限;如果发现保留期限过长,可以随时缩短,但一旦备份被删除,就无法重新创建。
16.7. Backup Protection 16.7. 备份保护
You can mark a backup as protected to prevent its removal. Attempting to
remove a protected backup via Proxmox VE’s UI, CLI or API will fail. However, this
is enforced by Proxmox VE and not the file-system; that means that a manual removal
of a backup file itself is still possible for anyone with write access to the
underlying backup storage.
您可以将备份标记为受保护,以防止其被删除。通过 Proxmox VE 的用户界面、命令行界面或 API 尝试删除受保护的备份将失败。然而,这种保护是由 Proxmox VE 强制执行的,而不是文件系统强制执行的,这意味着任何拥有底层备份存储写权限的人仍然可以手动删除备份文件本身。
|
|
Protected backups are ignored by pruning and do not count towards the
retention settings. 受保护的备份会被修剪操作忽略,并且不计入保留设置。 |
For filesystem-based storages, the protection is implemented via a sentinel file
<backup-name>.protected. For Proxmox Backup Server, it is handled on the
server side (available since Proxmox Backup Server version 2.1).
对于基于文件系统的存储,保护通过哨兵文件 <backup-name>.protected 实现。对于 Proxmox Backup Server,则由服务器端处理(自 Proxmox Backup Server 2.1 版本起支持)。
Use the storage option max-protected-backups to control how many protected
backups per guest are allowed on the storage. Use -1 for unlimited. The
default is unlimited for users with Datastore.Allocate privilege and 5 for
other users.
使用存储选项 max-protected-backups 来控制每个客户机在存储上允许的受保护备份数量。使用 -1 表示无限制。默认情况下,拥有 Datastore.Allocate 权限的用户为无限制,其他用户为 5 个。
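For example, to allow up to ten protected backups per guest on a given storage (STORAGEID is a placeholder; this assumes the option can be set via pvesm like other storage options):
# pvesm set STORAGEID --max-protected-backups 10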
16.8. Backup Notes 16.8. 备份备注
You can add notes to backups using the Edit Notes button in the UI or via the
storage content API.
您可以通过 UI 中的“编辑备注”按钮或通过存储内容 API 为备份添加备注。
It is also possible to specify a template for generating notes dynamically for
a backup job and for manual backup. The template string can contain variables,
surrounded by two curly braces, which will be replaced by the corresponding
value when the backup is executed.
还可以为备份任务和手动备份指定一个模板,用于动态生成备注。模板字符串可以包含用双大括号括起来的变量,这些变量将在备份执行时被相应的值替换。
Currently supported are: 当前支持:
-
{{cluster}} the cluster name, if any
{{cluster}} 集群名称(如果有) -
{{guestname}} the virtual guest’s assigned name
{{guestname}} 虚拟客户机的分配名称 -
{{node}} the host name of the node on which the backup is being created
{{node}} 创建备份的节点主机名 -
{{vmid}} the numerical VMID of the guest
{{vmid}} 客户机的数字 VMID
When specified via API or CLI, it needs to be a single line, where newline and
backslash need to be escaped as literal \n and \\ respectively.
通过 API 或 CLI 指定时,需要为单行,其中换行符和反斜杠分别需要转义为字面量 \n 和 \\。
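As a sketch, a manual backup with a two-line notes template could be invoked like this (note the escaped newline; the storage name and VMID are taken from the earlier examples):
# vzdump 777 --storage pbs-storage --notes-template "{{guestname}} on {{node}}\nVMID: {{vmid}}"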
16.9. Restore 16.9. 恢复
A backup archive can be restored through the Proxmox VE web GUI or through the
following CLI tools:
备份归档可以通过 Proxmox VE 网页 GUI 或以下 CLI 工具进行恢复:
- pct restore
-
Container restore utility
容器恢复工具 - qmrestore
-
Virtual Machine restore utility
虚拟机恢复工具
For details see the corresponding manual pages.
详情请参见相应的手册页。
16.9.1. Bandwidth Limit 16.9.1. 带宽限制
Restoring one or more big backups may need a lot of resources, especially
storage bandwidth for both reading from the backup storage and writing to
the target storage. This can negatively affect other virtual guests as access
to storage can get congested.
恢复一个或多个大型备份可能需要大量资源,尤其是备份存储的读取和目标存储的写入带宽。这可能会对其他虚拟机产生负面影响,因为存储访问可能会变得拥堵。
To avoid this you can set bandwidth limits for a backup job. Proxmox VE
implements two kinds of limits for restoring and archive:
为避免这种情况,您可以为备份任务设置带宽限制。Proxmox VE 对恢复和归档实现了两种限制:
-
per-restore limit: denotes the maximal amount of bandwidth for reading from a backup archive
每次恢复限制:表示从备份存档读取的最大带宽量 -
per-storage write limit: denotes the maximal amount of bandwidth used for writing to a specific storage
每存储写入限制:表示用于写入特定存储的最大带宽量
The read limit indirectly affects the write limit, as we cannot write more
than we read. A smaller per-job limit will overwrite a bigger per-storage
limit. A bigger per-job limit will only overwrite the per-storage limit if
you have ‘Data.Allocate’ permissions on the affected storage.
读取限制间接影响写入限制,因为我们不能写入超过读取的量。较小的每任务限制将覆盖较大的每存储限制。较大的每任务限制只有在您对受影响的存储拥有“Data.Allocate”权限时,才会覆盖每存储限制。
You can use the --bwlimit <integer> option of the restore CLI commands
to set up a restore-job-specific bandwidth limit. KiB/s is used as the unit
for the limit; this means passing 10240 will limit the read speed of the
backup to 10 MiB/s, ensuring that the rest of the available storage bandwidth
remains available for the already running virtual guests, and thus the backup
does not impact their operations.
您可以使用恢复命令行界面的‘--bwlimit <integer>’选项来设置恢复任务的特定带宽限制。限制的单位为 KiB/s,这意味着传入‘10240’将限制备份的读取速度为 10 MiB/s,确保剩余的存储带宽可用于已运行的虚拟机,从而使备份不会影响它们的运行。
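For instance, a restore limited to roughly 10 MiB/s of read bandwidth might look like this (a sketch; the archive path and VMID are placeholders):
# qmrestore /mnt/backup/vzdump-qemu-888.vma 601 --bwlimit 10240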
|
|
You can use 0 for the bwlimit parameter to disable all limits for
a specific restore job. This can be helpful if you need to restore a very
important virtual guest as fast as possible. (Needs Data.Allocate
permissions on storage) 您可以使用 `0` 作为 bwlimit 参数来禁用特定恢复任务的所有限制。如果您需要尽快恢复一个非常重要的虚拟机,这将非常有用。(需要存储上的 `Data.Allocate` 权限) |
Most of the time, the generally available bandwidth of a storage stays the
same, so we implemented the possibility to set a default bandwidth limit
per configured storage. This can be done with:
大多数情况下,您的存储的可用带宽随时间基本保持不变,因此我们实现了为每个配置的存储设置默认带宽限制的功能,这可以通过以下方式完成:
# pvesm set STORAGEID --bwlimit restore=KIBs
16.9.2. Live-Restore 16.9.2. 实时恢复
Restoring a large backup can take a long time, in which a guest is still
unavailable. For VM backups stored on a Proxmox Backup Server, this wait
time can be mitigated using the live-restore option.
恢复大型备份可能需要很长时间,在此期间虚拟机仍然不可用。对于存储在 Proxmox Backup Server 上的虚拟机备份,可以使用实时恢复选项来减少等待时间。
Enabling live-restore via either the checkbox in the GUI or the --live-restore
argument of qmrestore causes the VM to start as soon as the restore
begins. Data is copied in the background, prioritizing chunks that the VM is
actively accessing.
通过在图形界面中勾选复选框或使用 qmrestore 的 --live-restore 参数启用实时恢复,会导致虚拟机在恢复开始时立即启动。数据会在后台复制,优先处理虚拟机正在访问的数据块。
Note that this comes with two caveats:
请注意,这有两个注意事项:
-
During live-restore, the VM will operate with limited disk read speeds, as data has to be loaded from the backup server (once loaded, it is immediately available on the destination storage however, so accessing data twice only incurs the penalty the first time). Write speeds are largely unaffected.
在实时恢复期间,虚拟机的磁盘读取速度会受到限制,因为数据必须从备份服务器加载(但一旦加载完成,数据会立即在目标存储上可用,因此访问数据两次只会在第一次访问时产生性能损失)。写入速度基本不受影响。 -
If the live-restore fails for any reason, the VM will be left in an undefined state - that is, not all data might have been copied from the backup, and it is most likely not possible to keep any data that was written during the failed restore operation.
如果实时恢复因任何原因失败,虚拟机将处于未定义状态——即并非所有数据都已从备份中复制,且很可能无法保留在失败恢复操作期间写入的任何数据。
This mode of operation is especially useful for large VMs, where only a small
amount of data is required for initial operation, e.g. web servers - once the OS
and necessary services have been started, the VM is operational, while the
background task continues copying seldom used data.
这种操作模式对于大型虚拟机尤其有用,因为初始运行只需少量数据,例如网页服务器——一旦操作系统和必要的服务启动,虚拟机即可运行,而后台任务则继续复制不常用的数据。
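A live-restore started from the command line could look roughly like this (a minimal sketch; the Proxmox Backup Server storage pbs, the backup volume ID, and the target storage local-lvm are all assumptions):
# qmrestore pbs:backup/vm/888/2024-01-01T00:00:00Z 601 --live-restore 1 --storage local-lvm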
16.9.3. Single File Restore
16.9.3. 单文件恢复
The File Restore button in the Backups tab of the storage GUI can be used to
open a file browser directly on the data contained in a backup. This feature
is only available for backups on a Proxmox Backup Server.
存储 GUI 的备份标签中的“文件恢复”按钮可用于直接打开备份中包含的数据的文件浏览器。此功能仅适用于 Proxmox 备份服务器上的备份。
For containers, the first layer of the file tree shows all included pxar
archives, which can be opened and browsed freely. For VMs, the first layer shows
contained drive images, which can be opened to reveal a list of supported
storage technologies found on the drive. In the most basic case, this will be an
entry called part, representing a partition table, which contains entries for
each partition found on the drive. Note that for VMs, not all data might be
accessible (unsupported guest file systems, storage technologies, etc…).
对于容器,文件树的第一层显示所有包含的 pxar 归档文件,可以自由打开和浏览。对于虚拟机,第一层显示包含的驱动器映像,打开后会显示驱动器上支持的存储技术列表。在最基本的情况下,会有一个名为 part 的条目,代表分区表,其中包含驱动器上每个分区的条目。请注意,对于虚拟机,并非所有数据都能访问(不支持的客户机文件系统、存储技术等)。
Files and directories can be downloaded using the Download button, the latter
being compressed into a zip archive on the fly.
文件和目录可以通过下载按钮下载,后者会即时压缩成一个 zip 归档。
To enable secure access to VM images, which might contain untrusted data, a
temporary VM (not visible as a guest) is started. This does not mean that data
downloaded from such an archive is inherently safe, but it avoids exposing the
hypervisor system to danger. The VM will stop itself after a timeout. This
entire process happens transparently from a user’s point of view.
为了实现对可能包含不可信数据的虚拟机镜像的安全访问,会启动一个临时虚拟机(不可见为访客)。这并不意味着从此类归档下载的数据本质上是安全的,但它避免了将管理程序系统暴露于危险之中。该虚拟机会在超时后自动停止。整个过程对用户来说是透明的。
|
|
For troubleshooting purposes, each temporary VM instance generates a log
file in /var/log/proxmox-backup/file-restore/. The log file might contain
additional information in case an attempt to restore individual files or
accessing file systems contained in a backup archive fails. 出于故障排除的目的,每个临时虚拟机实例都会在 /var/log/proxmox-backup/file-restore/ 生成一个日志文件。如果尝试恢复单个文件或访问备份归档中包含的文件系统失败,日志文件可能包含额外的信息。 |
16.10. Configuration 16.10. 配置
Global configuration is stored in /etc/vzdump.conf. The file uses a
simple colon separated key/value format. Each line has the following
format:
全局配置存储在 /etc/vzdump.conf 文件中。该文件使用简单的冒号分隔的键/值格式。每行的格式如下:
OPTION: value
Blank lines in the file are ignored, and lines starting with a #
character are treated as comments and are also ignored. Values from
this file are used as default, and can be overwritten on the command
line.
文件中的空行会被忽略,以 # 字符开头的行被视为注释,也会被忽略。该文件中的值作为默认值使用,可以在命令行中覆盖。
We currently support the following options:
我们目前支持以下选项:
-
bwlimit: <integer> (0 - N) (default = 0)
bwlimit: <整数> (0 - N) (默认 = 0) -
Limit I/O bandwidth (in KiB/s).
限制 I/O 带宽(以 KiB/s 为单位)。 -
compress: <0 | 1 | gzip | lzo | zstd> (default = 0)
compress: <0 | 1 | gzip | lzo | zstd>(默认 = 0) -
Compress dump file. 压缩转储文件。
- dumpdir: <string> dumpdir: <字符串>
-
Store resulting files to specified directory.
将生成的文件存储到指定目录。 - exclude-path: <array> exclude-path: <数组>
-
Exclude certain files/directories (shell globs). Paths starting with / are anchored to the container’s root, other paths match relative to each subdirectory.
排除某些文件/目录(shell 通配符)。以 / 开头的路径锚定到容器根目录,其他路径相对于每个子目录匹配。 -
fleecing: [[enabled=]<1|0>] [,storage=<storage ID>]
fleecing: [[enabled=]<1|0>] [,storage=<存储 ID>] -
Options for backup fleecing (VM only).
备份缓存选项(仅限虚拟机)。-
enabled=<boolean> (default = 0)
enabled=<布尔值>(默认 = 0) -
Enable backup fleecing. Cache backup data from blocks where new guest writes happen on specified storage instead of copying them directly to the backup target. This can help guest IO performance and even prevent hangs, at the cost of requiring more storage space.
启用备份缓存。在指定存储上缓存新来宾写入的块的备份数据,而不是直接复制到备份目标。这可以提升来宾的 IO 性能,甚至防止挂起,但代价是需要更多的存储空间。 - storage=<storage ID> storage=<存储 ID>
-
Use this storage to store fleecing images. For efficient space usage, it’s best to use a local storage that supports discard and either thin provisioning or sparse files.
使用此存储来存储镜像文件。为了高效利用空间,最好使用支持丢弃(discard)且支持精简配置(thin provisioning)或稀疏文件(sparse files)的本地存储。
-
ionice: <integer> (0 - 8) (default = 7)
ionice: <整数>(0 - 8)(默认 = 7) -
Set IO priority when using the BFQ scheduler. For snapshot and suspend mode backups of VMs, this only affects the compressor. A value of 8 means the idle priority is used, otherwise the best-effort priority is used with the specified value.
在使用 BFQ 调度器时设置 IO 优先级。对于虚拟机的快照和挂起模式备份,这只影响压缩器。值为 8 表示使用空闲优先级,否则使用指定值的尽力而为优先级。 -
lockwait: <integer> (0 - N) (default = 180)
lockwait: <整数>(0 - N)(默认 = 180) -
Maximal time to wait for the global lock (minutes).
等待全局锁的最长时间(分钟)。 -
mailnotification: <always | failure> (default = always)
邮件通知:<always | failure>(默认 = always) -
Deprecated: use notification targets/matchers instead. Specify when to send a notification mail
已弃用:请改用通知目标/匹配器。指定何时发送通知邮件 - mailto: <string> mailto: <字符串>
-
Deprecated: Use notification targets/matchers instead. Comma-separated list of email addresses or users that should receive email notifications.
已弃用:请改用通知目标/匹配器。以逗号分隔的电子邮件地址或用户列表,这些地址或用户将接收电子邮件通知。 -
maxfiles: <integer> (1 - N)
maxfiles: <整数>(1 - N) -
Deprecated: use prune-backups instead. Maximal number of backup files per guest system.
已弃用:请改用 prune-backups。每个客户系统的最大备份文件数。 -
mode: <snapshot | stop | suspend> (default = snapshot)
mode: <snapshot | stop | suspend>(默认 = snapshot) -
Backup mode. 备份模式。
-
notes-template: <string>
notes-template: <字符串> -
Template string for generating notes for the backup(s). It can contain variables which will be replaced by their values. Currently supported are {{cluster}}, {{guestname}}, {{node}}, and {{vmid}}, but more might be added in the future. Needs to be a single line, newline and backslash need to be escaped as \n and \\ respectively.
用于生成备份备注的模板字符串。它可以包含将被替换为其值的变量。目前支持 {{cluster}}、{{guestname}}、{{node}} 和 {{vmid}},未来可能会添加更多。必须为单行,换行符和反斜杠分别需要转义为 \n 和 \\。Requires option(s): storage
需要选项:storage -
notification-mode: <auto | legacy-sendmail | notification-system> (default = auto)
notification-mode: <auto | legacy-sendmail | notification-system>(默认 = auto) -
Determine which notification system to use. If set to legacy-sendmail, vzdump will consider the mailto/mailnotification parameters and send emails to the specified address(es) via the sendmail command. If set to notification-system, a notification will be sent via PVE’s notification system, and the mailto and mailnotification will be ignored. If set to auto (default setting), an email will be sent if mailto is set, and the notification system will be used if not.
确定使用哪种通知系统。如果设置为 legacy-sendmail,vzdump 将考虑 mailto/mailnotification 参数,并通过 sendmail 命令向指定的地址发送邮件。如果设置为 notification-system,将通过 PVE 的通知系统发送通知,mailto 和 mailnotification 参数将被忽略。如果设置为 auto(默认设置),当设置了 mailto 时会发送电子邮件,否则使用通知系统。 -
notification-policy: <always | failure | never> (default = always)
notification-policy: <always | failure | never>(默认 = always) -
Deprecated: Do not use
已弃用:请勿使用 - notification-target: <string>
-
Deprecated: Do not use
已弃用:请勿使用 - pbs-change-detection-mode: <data | legacy | metadata>
-
PBS mode used to detect file changes and switch encoding format for container backups.
用于检测文件更改并切换容器备份编码格式的 PBS 模式。 -
performance: [max-workers=<integer>] [,pbs-entries-max=<integer>]
performance: [max-workers=<整数>] [,pbs-entries-max=<整数>] -
Other performance-related settings.
其他与性能相关的设置。-
max-workers=<integer> (1 - 256) (default = 16)
max-workers=<整数>(1 - 256)(默认值 = 16) -
Applies to VMs. Allow up to this many IO workers at the same time.
适用于虚拟机。允许同时最多有这么多 IO 工作线程。 -
pbs-entries-max=<integer> (1 - N) (default = 1048576)
pbs-entries-max=<整数>(1 - N)(默认值 = 1048576) -
Applies to container backups sent to PBS. Limits the number of entries allowed in memory at a given time to avoid unintended OOM situations. Increase it to enable backups of containers with a large amount of files.
适用于发送到 PBS 的容器备份。限制内存中允许的条目数量,以避免意外的内存溢出(OOM)情况。增加该值以支持备份包含大量文件的容器。
-
pigz: <integer> (default = 0)
pigz: <整数>(默认值 = 0) -
Use pigz instead of gzip when N>0. N=1 uses half of cores, N>1 uses N as thread count.
当 N>0 时使用 pigz 代替 gzip。N=1 时使用一半的核心数,N>1 时使用 N 作为线程数。 - pool: <string>
-
Backup all known guest systems included in the specified pool.
备份指定池中所有已知的客户系统。 - protected: <boolean>
-
If true, mark backup(s) as protected.
如果为真,则将备份标记为受保护。Requires option(s): storage
需要选项:storage -
prune-backups: [keep-all=<1|0>] [,keep-daily=<N>] [,keep-hourly=<N>] [,keep-last=<N>] [,keep-monthly=<N>] [,keep-weekly=<N>] [,keep-yearly=<N>] (default = keep-all=1)
prune-backups: [keep-all=<1|0>] [,keep-daily=<N>] [,keep-hourly=<N>] [,keep-last=<N>] [,keep-monthly=<N>] [,keep-weekly=<N>] [,keep-yearly=<N>](默认 = keep-all=1) -
Use these retention options instead of those from the storage configuration.
使用这些保留选项替代存储配置中的选项。- keep-all=<boolean> keep-all=<布尔值>
-
Keep all backups. Conflicts with the other options when true.
保留所有备份。设置为 true 时与其他选项冲突。 - keep-daily=<N>
-
Keep backups for the last <N> different days. If there is more than one backup for a single day, only the latest one is kept.
保留最近 <N> 个不同日期的备份。如果某一天有多个备份,则只保留最新的一个。 - keep-hourly=<N>
-
Keep backups for the last <N> different hours. If there is more than one backup for a single hour, only the latest one is kept.
保留最近 <N> 个不同小时的备份。如果某个小时有多个备份,则只保留最新的一个。 - keep-last=<N>
-
Keep the last <N> backups.
保留最近的 <N> 个备份。 - keep-monthly=<N>
-
Keep backups for the last <N> different months. If there is more than one backup for a single month, only the latest one is kept.
保留最近 <N> 个不同月份的备份。如果某个月有多个备份,则只保留最新的一个。 - keep-weekly=<N>
-
Keep backups for the last <N> different weeks. If there is more than one backup for a single week, only the latest one is kept.
保留最近 <N> 个不同周的备份。如果某周有多个备份,则只保留最新的一个。 - keep-yearly=<N>
-
Keep backups for the last <N> different years. If there is more than one backup for a single year, only the latest one is kept.
保留最近 <N> 个不同年份的备份。如果某一年有多个备份,则只保留最新的一个。
-
remove: <boolean> (default = 1)
remove: <boolean>(默认值 = 1) -
Prune older backups according to prune-backups.
根据 prune-backups 修剪较旧的备份。 - script: <string>
-
Use specified hook script.
使用指定的钩子脚本。 -
stdexcludes: <boolean> (default = 1)
stdexcludes:<布尔值>(默认 = 1) -
Exclude temporary files and logs.
排除临时文件和日志。 -
stopwait: <integer> (0 - N) (default = 10)
stopwait:<整数>(0 - N)(默认 = 10) -
Maximal time to wait until a guest system is stopped (minutes).
等待客户机系统停止的最长时间(分钟)。 - storage: <storage ID>
-
Store resulting file to this storage.
将生成的文件存储到此存储中。 - tmpdir: <string>
-
Store temporary files to specified directory.
将临时文件存储到指定目录。 -
zstd: <integer> (default = 1)
zstd: <整数>(默认值 = 1) -
Zstd threads. N=0 uses half of the available cores, if N is set to a value bigger than 0, N is used as thread count.
Zstd 线程数。N=0 时使用可用核心数的一半,如果 N 设置为大于 0 的值,则 N 作为线程数使用。
Example vzdump.conf Configuration vzdump.conf 配置示例
tmpdir: /mnt/fast_local_disk
storage: my_backup_storage
mode: snapshot
bwlimit: 10000
16.11. Hook Scripts 16.11. 钩子脚本
You can specify a hook script with option --script. This script is
called at various phases of the backup process, with parameters
accordingly set. You can find an example in the documentation
directory (vzdump-hook-script.pl).
您可以使用选项 --script 指定一个钩子脚本。该脚本会在备份过程的各个阶段被调用,并相应地传入参数。您可以在文档目录中找到一个示例(vzdump-hook-script.pl)。
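For example, a one-off backup run with a custom hook could be invoked like this (a sketch; the script path is a placeholder and the script must be executable):
# vzdump 777 --script /usr/local/bin/vzdump-hook.pl
To apply the same hook to all jobs, the path can instead be set via the script option in /etc/vzdump.conf.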
16.12. File Exclusions 16.12. 文件排除
|
|
This option is only available for container backups. 此选项仅适用于容器备份。 |
vzdump skips the following files by default (disable with the option
--stdexcludes 0)
vzdump 默认跳过以下文件(可通过选项 --stdexcludes 0 禁用)
/tmp/?* /var/tmp/?* /var/run/?*pid
You can also manually specify (additional) exclude paths, for example:
您也可以手动指定(额外的)排除路径,例如:
# vzdump 777 --exclude-path /tmp/ --exclude-path '/var/foo*'
excludes the directory /tmp/ and any file or directory named /var/foo,
/var/foobar, and so on.
排除目录 /tmp/ 以及任何名为 /var/foo、/var/foobar 等的文件或目录。
|
|
For backups to Proxmox Backup Server (PBS) and suspend mode backups,
patterns with a trailing slash will match directories, but not files. On the
other hand, for non-PBS snapshot mode and stop mode backups, patterns with a
trailing slash currently do not match at all, because the tar command does not
support that. 对于备份到 Proxmox Backup Server (PBS) 和挂起模式备份,带有尾部斜杠的模式将匹配目录,但不匹配文件。另一方面,对于非 PBS 快照模式和停止模式备份,带有尾部斜杠的模式目前根本不匹配,因为 tar 命令不支持该功能。 |
Paths that do not start with a / are not anchored to the container’s root,
but will match relative to any subdirectory. For example:
不以 / 开头的路径不会锚定到容器的根目录,而是相对于任何子目录进行匹配。例如:
# vzdump 777 --exclude-path bar
excludes any file or directory named /bar, /var/bar, /var/foo/bar, and
so on, but not /bar2.
排除任何名为 /bar、/var/bar、/var/foo/bar 等的文件或目录,但不排除 /bar2。
Configuration files are also stored inside the backup archive
(in ./etc/vzdump/) and will be correctly restored.
配置文件也存储在备份归档内(位于 ./etc/vzdump/),并且会被正确恢复。
16.13. Examples 16.13. 示例
Simply dump guest 777 - no snapshot, just archive the guest private area and
configuration files to the default dump directory (usually
/var/lib/vz/dump/).
简单地转储客户机 777——不使用快照,只将客户机的私有区域和配置文件归档到默认的转储目录(通常是/var/lib/vz/dump/)。
# vzdump 777
Use rsync and suspend/resume to create a snapshot (minimal downtime).
使用 rsync 和挂起/恢复来创建快照(最小停机时间)。
# vzdump 777 --mode suspend
Backup all guest systems and send notification mails to root and admin.
Due to mailto being set and notification-mode being set to auto by
default, the notification mails are sent via the system’s sendmail
command instead of the notification system.
备份所有客户机系统并向 root 和 admin 发送通知邮件。由于 mailto 已设置且 notification-mode 默认设置为 auto,通知邮件通过系统的 sendmail 命令发送,而不是通过通知系统。
# vzdump --all --mode suspend --mailto root --mailto admin
Use snapshot mode (no downtime) and non-default dump directory.
使用快照模式(无停机时间)和非默认的转储目录。
# vzdump 777 --dumpdir /mnt/backup --mode snapshot
Backup more than one guest (selectively)
备份多个客户机(选择性)
# vzdump 101 102 103 --mailto root
Backup all guests excluding 101 and 102
备份除 101 和 102 之外的所有客户机
# vzdump --mode suspend --exclude 101,102
Restore a container to a new CT 600
将容器恢复到新的 CT 600
# pct restore 600 /mnt/backup/vzdump-lxc-777.tar
Restore a QemuServer VM to VM 601
将 QemuServer 虚拟机恢复到 VM 601
# qmrestore /mnt/backup/vzdump-qemu-888.vma 601
Clone an existing container 101 to a new container 300 with a 4GB root
file system, using pipes
使用管道将现有容器 101 克隆到新的容器 300,根文件系统大小为 4GB
# vzdump 101 --stdout | pct restore --rootfs 4 300 -
17. Notifications 17. 通知
17.1. Overview 17.1. 概述
-
Proxmox VE emits Notification Events in case of storage replication failures, node fencing, finished/failed backups and other events. These events are handled by the notification system. A notification event has metadata, for example a timestamp, a severity level, a type, and other optional metadata fields.
Proxmox VE 在存储复制失败、节点隔离、备份完成/失败及其他事件发生时,会发出通知事件。这些事件由通知系统处理。通知事件包含元数据,例如时间戳、严重级别、类型及其他可选的元数据字段。 -
Notification Matchers route a notification event to one or more notification targets. A matcher can have match rules to selectively route based on the metadata of a notification event.
通知匹配器将通知事件路由到一个或多个通知目标。匹配器可以有匹配规则,根据通知事件的元数据选择性地进行路由。 -
Notification Targets are a destination to which a notification event is routed to by a matcher. There are multiple types of target, mail-based (Sendmail and SMTP) and Gotify.
通知目标是通知事件由匹配器路由到的目的地。目标类型有多种,包括基于邮件的(Sendmail 和 SMTP)以及 Gotify。
Backup jobs have a configurable Notification Mode.
It allows you to choose between the notification system and a legacy mode
for sending notification emails. The legacy mode is equivalent to the
way notifications were handled before Proxmox VE 8.1.
备份任务具有可配置的通知模式。它允许您在通知系统和用于发送通知邮件的传统模式之间进行选择。传统模式相当于 Proxmox VE 8.1 之前处理通知的方式。
The notification system can be configured in the GUI under
Datacenter → Notifications. The configuration is stored in
/etc/pve/notifications.cfg and /etc/pve/priv/notifications.cfg -
the latter contains sensitive configuration options such as
passwords or authentication tokens for notification targets and can
only be read by root.
通知系统可以在 GUI 中通过数据中心 → 通知进行配置。配置存储在 /etc/pve/notifications.cfg 和 /etc/pve/priv/notifications.cfg 中——后者包含敏感的配置选项,如通知目标的密码或认证令牌,仅 root 用户可读取。
17.2. Notification Targets
17.2. 通知目标
Proxmox VE offers multiple types of notification targets.
Proxmox VE 提供多种类型的通知目标。
17.2.1. Sendmail
The sendmail binary is a program commonly found on Unix-like operating systems
that handles the sending of email messages.
It is a command-line utility that allows users and applications to send emails
directly from the command line or from within scripts.
sendmail 二进制程序是常见于类 Unix 操作系统上的一个程序,用于处理电子邮件的发送。它是一个命令行工具,允许用户和应用程序直接从命令行或脚本中发送电子邮件。
The sendmail notification target uses the sendmail binary to send emails to a
list of configured users or email addresses. If a user is selected as a recipient,
the email address configured in the user’s settings will be used.
For the root@pam user, this is the email address entered during installation.
A user’s email address can be configured in
Datacenter → Permissions → Users.
If a user has no associated email address, no email will be sent.
sendmail 通知目标使用 sendmail 二进制文件向配置的用户或电子邮件地址列表发送邮件。如果选择了某个用户作为接收者,将使用该用户设置中配置的电子邮件地址。对于 root@pam 用户,使用的是安装过程中输入的电子邮件地址。用户的电子邮件地址可以在数据中心 → 权限 → 用户中配置。如果用户没有关联的电子邮件地址,则不会发送邮件。
|
|
In standard Proxmox VE installations, the sendmail binary is provided by
Postfix. It may be necessary to configure Postfix so that it can deliver
mails correctly - for example by setting an external mail relay (smart host).
In case of failed delivery, check the system logs for messages logged by
the Postfix daemon. 在标准的 Proxmox VE 安装中,sendmail 二进制文件由 Postfix 提供。可能需要配置 Postfix 以确保邮件能够正确投递——例如通过设置外部邮件中继(智能主机)。如果投递失败,请检查系统日志中 Postfix 守护进程记录的消息。 |
The configuration for Sendmail target plugins has the following options:
Sendmail 目标插件的配置具有以下选项:
-
mailto: E-Mail address to which the notification shall be sent to. Can be set multiple times to accommodate multiple recipients.
mailto:通知将发送到的电子邮件地址。可以多次设置以支持多个接收者。 -
mailto-user: Users to which emails shall be sent to. The user’s email address will be looked up in users.cfg. Can be set multiple times to accommodate multiple recipients.
mailto-user:邮件将发送给的用户。用户的电子邮件地址将在 users.cfg 中查找。可以多次设置以适应多个收件人。 -
author: Sets the author of the E-Mail. Defaults to Proxmox VE.
author:设置电子邮件的作者。默认值为 Proxmox VE。 -
from-address: Sets the from address of the E-Mail. If the parameter is not set, the plugin will fall back to the email_from setting from datacenter.cfg. If that is also not set, the plugin will default to root@$hostname, where $hostname is the hostname of the node.
from-address:设置电子邮件的发件人地址。如果未设置此参数,插件将回退到 datacenter.cfg 中的 email_from 设置。如果该设置也未配置,插件将默认使用 root@$hostname,其中 $hostname 是节点的主机名。 -
comment: Comment for this target. The From header in the email will be set to $author <$from-address>.
comment:此目标的注释。电子邮件中的发件人头部将设置为 $author <$from-address>。
Example configuration (/etc/pve/notifications.cfg):
示例配置(/etc/pve/notifications.cfg):
sendmail: example
mailto-user root@pam
mailto-user admin@pve
mailto max@example.com
from-address pve1@example.com
comment Send to multiple users/addresses
17.2.2. SMTP
SMTP notification targets can send emails directly to an SMTP mail relay.
This target does not use the system’s MTA to deliver emails.
Similar to sendmail targets, if a user is selected as a recipient, the user’s configured
email address will be used.
SMTP 通知目标可以直接向 SMTP 邮件中继发送电子邮件。该目标不使用系统的 MTA 来发送邮件。与 sendmail 目标类似,如果选择了用户作为收件人,将使用该用户配置的电子邮件地址。
|
|
Unlike sendmail targets, SMTP targets do not have any queuing/retry mechanism
in case of a failed mail delivery. 与 sendmail 目标不同,SMTP 目标在邮件发送失败时没有任何排队/重试机制。 |
The configuration for SMTP target plugins has the following options:
SMTP 目标插件的配置具有以下选项:
-
mailto: E-Mail address to which the notification shall be sent to. Can be set multiple times to accommodate multiple recipients.
mailto:通知将发送到的电子邮件地址。可以设置多次以支持多个收件人。 -
mailto-user: Users to which emails shall be sent to. The user’s email address will be looked up in users.cfg. Can be set multiple times to accommodate multiple recipients.
mailto-user:将接收邮件的用户。用户的电子邮件地址将在 users.cfg 中查找。可以设置多次以支持多个收件人。 -
author: Sets the author of the E-Mail. Defaults to Proxmox VE.
author:设置电子邮件的发件人。默认值为 Proxmox VE。 -
from-address: Sets the From-address of the email. SMTP relays might require that this address is owned by the user in order to avoid spoofing. The From header in the email will be set to $author <$from-address>.
from-address:设置电子邮件的发件人地址。SMTP 中继可能要求该地址归用户所有,以避免伪造。电子邮件中的发件人头部将被设置为 $author <$from-address>。 -
username: Username to use during authentication. If no username is set, no authentication will be performed. The PLAIN and LOGIN authentication methods are supported.
username:认证时使用的用户名。如果未设置用户名,则不会执行认证。支持 PLAIN 和 LOGIN 认证方法。 -
password: Password to use when authenticating.
password:认证时使用的密码。 -
mode: Sets the encryption mode (insecure, starttls or tls). Defaults to tls.
mode:设置加密模式(insecure、starttls 或 tls)。默认值为 tls。 -
server: Address/IP of the SMTP relay
server:SMTP 中继的地址/IP -
port: The port to connect to. If not set, the used port defaults to 25 (insecure), 465 (tls) or 587 (starttls), depending on the value of mode.
port:连接端口。如果未设置,所使用的端口将根据 mode 的值默认为 25(不安全)、465(tls)或 587(starttls)。 -
comment: Comment for this target
comment:该目标的备注
Example configuration (/etc/pve/notifications.cfg):
示例配置(/etc/pve/notifications.cfg):
smtp: example
mailto-user root@pam
mailto-user admin@pve
mailto max@example.com
from-address pve1@example.com
username pve1
server mail.example.com
mode starttls
The matching entry in /etc/pve/priv/notifications.cfg, containing the
secret token:
/etc/pve/priv/notifications.cfg 中的匹配条目,包含秘密代币:
smtp: example
password somepassword
17.2.3. Gotify
Gotify is an open-source self-hosted notification server that
allows you to send and receive push notifications to various devices and
applications. It provides a simple API and web interface, making it easy to
integrate with different platforms and services.
Gotify 是一个开源的自托管通知服务器,允许您向各种设备和应用程序发送和接收推送通知。它提供了简单的 API 和网页界面,使其易于与不同的平台和服务集成。
The configuration for Gotify target plugins has the following options:
Gotify 目标插件的配置具有以下选项:
-
server: The base URL of the Gotify server, e.g. http://<ip>:8888
server:Gotify 服务器的基础 URL,例如 http://<ip>:8888 -
token: The authentication token. Tokens can be generated within the Gotify web interface.
token:认证代币。代币可以在 Gotify 网页界面中生成。 -
comment: Comment for this target
comment:此目标的备注
|
|
The Gotify target plugin will respect the HTTP proxy settings from the
datacenter configuration Gotify 目标插件将遵循数据中心配置中的 HTTP 代理设置。 |
Example configuration (/etc/pve/notifications.cfg):
示例配置(/etc/pve/notifications.cfg):
gotify: example
server http://gotify.example.com:8888
comment Send to multiple users/addresses
The matching entry in /etc/pve/priv/notifications.cfg, containing the
secret token:
/etc/pve/priv/notifications.cfg 中的匹配条目,包含秘密代币:
gotify: example
token somesecrettoken
17.2.4. Webhook
Webhook notification targets perform HTTP requests to a configurable URL.
Webhook 通知目标对可配置的 URL 执行 HTTP 请求。
The following configuration options are available:
以下配置选项可用:
-
url: The URL to which to perform the HTTP requests. Supports templating to inject message contents, metadata and secrets.
url:执行 HTTP 请求的 URL。支持模板以注入消息内容、元数据和密钥。 -
method: HTTP Method to use (POST/PUT/GET)
method:使用的 HTTP 方法(POST/PUT/GET) -
header: Array of HTTP headers that should be set for the request. Supports templating to inject message contents, metadata and secrets.
header:应为请求设置的 HTTP 头数组。支持模板以注入消息内容、元数据和密钥。 -
body: HTTP body that should be sent. Supports templating to inject message contents, metadata and secrets.
body:应发送的 HTTP 正文。支持模板功能,可注入消息内容、元数据和密钥。 -
secret: Array of secret key-value pairs. These will be stored in a protected configuration file only readable by root. Secrets can be accessed in body/header/URL templates via the secrets namespace.
secret:密钥值对数组。这些将存储在仅 root 可读的受保护配置文件中。可以通过 secrets 命名空间在正文/头部/URL 模板中访问密钥。 -
comment: Comment for this target.
comment:此目标的注释。
For configuration options that support templating, the
Handlebars syntax can be used to
access the following properties:
对于支持模板的配置选项,可以使用 Handlebars 语法访问以下属性:
-
{{ title }}: The rendered notification title
{{ title }}:渲染后的通知标题 -
{{ message }}: The rendered notification body
{{ message }}:渲染后的通知内容 -
{{ severity }}: The severity of the notification (info, notice, warning, error, unknown)
{{ severity }}:通知的严重级别(信息、通知、警告、错误、未知) -
{{ timestamp }}: The notification’s timestamp as a UNIX epoch (in seconds).
{{ timestamp }}:通知的时间戳,UNIX 纪元时间(秒) -
{{ fields.<name> }}: Sub-namespace for any metadata fields of the notification. For instance, fields.type contains the notification type - for all available fields refer to Notification Events.
{{ fields.<name> }}:通知的任何元数据字段的子命名空间。例如,fields.type 包含通知类型——有关所有可用字段,请参阅通知事件。 -
{{ secrets.<name> }}: Sub-namespace for secrets. For instance, a secret named token is accessible via secrets.token.
{{ secrets.<name> }}:秘密的子命名空间。例如,名为 token 的秘密可以通过 secrets.token 访问。
For convenience, the following helpers are available:
为了方便,提供了以下辅助工具:
-
{{ url-encode <value/property> }}: URL-encode a property/literal.
{{ url-encode <value/property> }}:对属性/字面值进行 URL 编码。 -
{{ escape <value/property> }}: Escape any control characters that cannot be safely represented as a JSON string.
{{ escape <value/property> }}:转义任何无法安全表示为 JSON 字符串的控制字符。 -
{{ json <value/property> }}: Render a value as JSON. This can be useful to pass a whole sub-namespace (e.g. fields) as a part of a JSON payload (e.g. {{ json fields }}).
{{ json <value/property> }}:将一个值渲染为 JSON。这对于将整个子命名空间(例如 fields)作为 JSON 负载的一部分传递非常有用(例如 {{ json fields }})。
Examples 示例
ntfy.sh
-
Method: POST 方法:POST
-
URL: https://ntfy.sh/{{ secrets.channel }}
网址:https://ntfy.sh/{{ secrets.channel }} -
Headers: 请求头:
-
Markdown: Yes Markdown:是
-
-
Body: 正文:
```
{{ message }}
```
-
Secrets: 秘密:
-
channel: <your ntfy.sh channel>
频道:<your ntfy.sh channel>
-
Discord
-
Method: POST 方法:POST
-
URL: https://discord.com/api/webhooks/{{ secrets.token }}
网址:https://discord.com/api/webhooks/{{ secrets.token }} -
Headers: 请求头:
-
Content-Type: application/json
内容类型:application/json
-
-
Body: 正文:
{
"content": "``` {{ escape message }}```"
}
-
Secrets: 机密:
-
token: <token> 代币:<token>
-
Slack
-
Method: POST 方法:POST
-
URL: https://hooks.slack.com/services/{{ secrets.token }}
网址:https://hooks.slack.com/services/{{ secrets.token }} -
Headers: 请求头:
-
Content-Type: application/json
内容类型:application/json
-
-
Body: 正文:
{
"text": "``` {{escape message}}```",
"type": "mrkdwn"
}
-
Secrets: 机密:
-
token: <token> 代币:<token>
-
17.3. Notification Matchers
17.3. 通知匹配器
Notification matchers route notifications to notification targets based
on their matching rules. These rules can match certain properties of a
notification, such as the timestamp (match-calendar), the severity of
the notification (match-severity) or metadata fields (match-field).
If a notification is matched by a matcher, all targets configured for the
matcher will receive the notification.
通知匹配器根据其匹配规则将通知路由到通知目标。这些规则可以匹配通知的某些属性,例如时间戳(match-calendar)、通知的严重性(match-severity)或元数据字段(match-field)。如果通知被匹配器匹配,所有为该匹配器配置的目标都会收到该通知。
An arbitrary number of matchers can be created, each with their own
matching rules and targets to notify.
Every target is notified at most once for every notification, even if
the target is used in multiple matchers.
可以创建任意数量的匹配器,每个匹配器都有自己的匹配规则和要通知的目标。每个目标对于每条通知最多只会被通知一次,即使该目标被多个匹配器使用。
A matcher without any matching rules is always true; the configured targets
will always be notified.
没有任何匹配规则的匹配器始终为真;配置的目标将始终被通知。
matcher: always-matches
target admin
comment This matcher always matches
17.3.1. Matcher Options 17.3.1. 匹配器选项
-
target: Determine which target should be notified if the matcher matches. Can be used multiple times to notify multiple targets.
target:确定如果匹配器匹配,应通知哪个目标。可以多次使用以通知多个目标。 -
invert-match: Inverts the result of the whole matcher
invert-match:反转整个匹配器的结果。 -
mode: Determines how the individual match rules are evaluated to compute the result for the whole matcher. If set to all, all matching rules must match. If set to any, at least one rule must match. Defaults to all.
mode:确定如何评估各个匹配规则以计算整个匹配器的结果。如果设置为 all,则所有匹配规则必须匹配。如果设置为 any,则至少一个规则必须匹配。默认值为 all。
match-calendar: Match the notification’s timestamp against a schedule
match-calendar:将通知的时间戳与日程表进行匹配。 -
match-field: Match the notification’s metadata fields
match-field:匹配通知的元数据字段 -
match-severity: Match the notification’s severity
match-severity:匹配通知的严重性 -
comment: Comment for this matcher
comment:此匹配器的注释
17.3.2. Calendar Matching Rules
17.3.2. 日历匹配规则
A calendar matcher matches the time when a notification is sent against a
configurable schedule.
日历匹配器将通知发送的时间与可配置的时间表进行匹配。
-
match-calendar 8-12
-
match-calendar 8:00-15:30
-
match-calendar mon-fri 9:00-17:00
match-calendar 周一至周五 9:00-17:00 -
match-calendar sun,tue-wed,fri 9-17
17.3.3. Field Matching Rules
17.3.3. 字段匹配规则
Notifications have a selection of metadata fields that can be matched.
When using exact as a matching mode, a , can be used as a separator.
The matching rule then matches if the metadata field has any of the specified
values.
通知有一组选项元数据字段可以匹配。使用 exact 作为匹配模式时,可以使用逗号(,)作为分隔符。匹配规则会在元数据字段包含任意指定值时匹配成功。
-
match-field exact:type=vzdump Only match notifications about backups.
match-field exact:type=vzdump 仅匹配关于备份的通知。 -
match-field exact:type=replication,fencing Match replication and fencing notifications.
match-field exact:type=replication,fencing 匹配复制和隔离通知。 -
match-field regex:hostname=^.+\.example\.com$ Match the hostname of the node.
match-field regex:hostname=^.+\.example\.com$ 匹配节点的主机名。
If a matched metadata field does not exist, the notification will not be
matched.
For instance, a match-field regex:hostname=.* directive will only match
notifications that have an arbitrary hostname metadata field, but will
not match if the field does not exist.
如果匹配的元数据字段不存在,则通知不会被匹配。例如,match-field regex:hostname=.* 指令只会匹配具有任意主机名元数据字段的通知,但如果该字段不存在,则不会匹配。
17.3.4. Severity Matching Rules
17.3.4. 严重性匹配规则
A notification has an associated severity that can be matched.
通知具有可匹配的关联严重性。
-
match-severity error: Only match errors
match-severity error:仅匹配错误 -
match-severity warning,error: Match warnings and error
match-severity warning,error:匹配警告和错误
The following severities are in use:
info, notice, warning, error, unknown.
以下严重性等级正在使用:info、notice、warning、error、unknown。
17.3.5. Examples 17.3.5. 示例
matcher: workday
match-calendar mon-fri 9-17
target admin
comment Notify admins during working hours
matcher: night-and-weekend
match-calendar mon-fri 9-17
invert-match true
target on-call-admins
comment Separate target for non-working hours
matcher: backup-failures
match-field exact:type=vzdump
match-severity error
target backup-admins
comment Send notifications about backup failures to one group of admins
matcher: cluster-failures
match-field exact:type=replication,fencing
target cluster-admins
comment Send cluster-related notifications to other group of admins
17.4. Notification Events
17.4. 通知事件
| Event 事件 | type 类型 | Severity 严重性 | Metadata fields (in addition to type) 元数据字段(除类型外) |
|---|---|---|---|
| System updates available 系统更新可用 | package-updates 包更新 | info 信息 | hostname 主机名 |
| Cluster node fenced 集群节点被隔离 | fencing 隔离 | error 错误 | hostname 主机名 |
| Storage replication job failed | replication 复制 | error 错误 | hostname, job-id 主机名,作业 ID |
| Backup succeeded 备份成功 | vzdump | info 信息 | hostname, job-id (only for backup jobs) |
| Backup failed 备份失败 | vzdump | error 错误 | hostname, job-id (only for backup jobs) |
| Mail for root root 的邮件 | system-mail 系统邮件 | unknown 未知 | hostname 主机名 |
| Field name 字段名称 | Description 描述 |
|---|---|
| type 类型 | Type of the notification 通知类型 |
| hostname 主机名 | Hostname, without domain (e.g. pve1) |
| job-id 作业 ID | Job ID 作业 ID |
|
|
Backup job notifications only have job-id set if the backup job
was executed automatically based on its schedule, but not if it was triggered
manually by the Run now button in the UI. 备份作业通知中只有在备份作业根据其计划自动执行时才会设置作业 ID,而如果是通过界面中的“立即运行”按钮手动触发,则不会设置。 |
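As an illustration, a matcher that only reacts to scheduled backup-job notifications could combine the type and job-id fields like this (a sketch; the job-id pattern and the target name backup-admins are assumptions):
matcher: scheduled-backups
    match-field exact:type=vzdump
    match-field regex:job-id=^backup-.*$
    target backup-admins
    comment Matches only runs where job-id is set, i.e. scheduled runs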
17.5. System Mail Forwarding
17.5. 系统邮件转发
Certain local system daemons, such as smartd, generate notification emails
that are initially directed to the local root user. Proxmox VE will
feed these mails into the notification system as a notification of
type system-mail and with severity unknown.
某些本地系统守护进程,如 smartd,会生成通知邮件,这些邮件最初会发送给本地 root 用户。Proxmox VE 会将这些邮件作为类型为 system-mail、严重性为未知的通知输入通知系统。
When the email is forwarded to a sendmail target, the mail’s content and headers
are forwarded as-is. For all other targets,
the system tries to extract both a subject line and the main text body
from the email content. In instances where emails solely consist of HTML
content, they will be transformed into plain text format during this process.
当邮件被转发到 sendmail 目标时,邮件的内容和头信息会原样转发。对于所有其他目标,系统会尝试从邮件内容中提取主题行和正文内容。如果邮件仅包含 HTML 内容,则在此过程中会将其转换为纯文本格式。
17.6. Permissions 17.6. 权限
To modify/view the configuration for notification targets,
the Mapping.Modify/Mapping.Audit permissions are required for the
/mapping/notifications ACL node.
要修改/查看通知目标的配置,需要对 /mapping/notifications ACL 节点拥有 Mapping.Modify/Mapping.Audit 权限。
Testing a target requires Mapping.Use, Mapping.Audit or Mapping.Modify
permissions on /mapping/notifications
测试目标需要在 /mapping/notifications 上具有 Mapping.Use、Mapping.Audit 或 Mapping.Modify 权限
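For example, read-only access for a user could be granted roughly like this with pveum (a sketch; the role name NotificationAudit and the user jane@pve are made up):
# pveum role add NotificationAudit --privs "Mapping.Audit"
# pveum acl modify /mapping/notifications --roles NotificationAudit --users jane@pve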
17.7. Notification Mode 17.7. 通知模式
A backup job configuration has the notification-mode
option which can have one of three values.
备份任务配置中有 notification-mode 选项,该选项可以有三种取值之一。
-
auto: Use the legacy-sendmail mode if an email address is entered in the mailto/Send email to field. If no email address is entered, the notification-system mode is used.
auto:如果在 mailto/发送邮件到 字段中输入了电子邮件地址,则使用 legacy-sendmail 模式。如果未输入电子邮件地址,则使用 notification-system 模式。
legacy-sendmail: Send notification emails via the system’s sendmail command. The notification system will be bypassed and any configured targets/matchers will be ignored. This mode is equivalent to the notification behavior for versions before Proxmox VE 8.1.
legacy-sendmail:通过系统的 sendmail 命令发送通知邮件。通知系统将被绕过,任何配置的目标/匹配器将被忽略。此模式等同于 Proxmox VE 8.1 之前版本的通知行为。 -
notification-system: Use the new, flexible notification system.
notification-system:使用新的灵活通知系统。
If the notification-mode option is not set, Proxmox VE will default
to auto.
如果未设置 notification-mode 选项,Proxmox VE 将默认使用 auto。
The legacy-sendmail mode might be removed in a later release of
Proxmox VE.
legacy-sendmail 模式可能会在 Proxmox VE 的后续版本中被移除。
18. Important Service Daemons
18. 重要的服务守护进程
18.1. pvedaemon - Proxmox VE API Daemon
18.1. pvedaemon - Proxmox VE API 守护进程
This daemon exposes the whole Proxmox VE API on 127.0.0.1:85. It runs as
root and has permission to do all privileged operations.
该守护进程在 127.0.0.1:85 上暴露整个 Proxmox VE API。它以 root 身份运行,拥有执行所有特权操作的权限。
|
|
The daemon listens to a local address only, so you cannot access
it from outside. The pveproxy daemon exposes the API to the outside
world. 该守护进程仅监听本地地址,因此无法从外部访问。pveproxy 守护进程负责将 API 暴露给外部。 |
18.2. pveproxy - Proxmox VE API Proxy Daemon
18.2. pveproxy - Proxmox VE API 代理守护进程
This daemon exposes the whole Proxmox VE API on TCP port 8006 using HTTPS. It runs
as user www-data and has very limited permissions. Operation requiring more
permissions are forwarded to the local pvedaemon.
该守护进程通过 HTTPS 在 TCP 端口 8006 上暴露整个 Proxmox VE API。它以用户 www-data 运行,权限非常有限。需要更高权限的操作会转发给本地的 pvedaemon。
Requests targeted for other nodes are automatically forwarded to those nodes.
This means that you can manage your whole cluster by connecting to a single
Proxmox VE node.
针对其他节点的请求会自动转发到相应节点。这意味着您可以通过连接到单个 Proxmox VE 节点来管理整个集群。
18.2.1. Host based Access Control
18.2.1. 基于主机的访问控制
It is possible to configure “apache2”-like access control lists. Values are
read from file /etc/default/pveproxy. For example:
可以配置类似“apache2”的访问控制列表。值从文件 /etc/default/pveproxy 中读取。例如:
ALLOW_FROM="10.0.0.1-10.0.0.5,192.168.0.0/22"
DENY_FROM="all"
POLICY="allow"
IP addresses can be specified using any syntax understood by Net::IP. The
name all is an alias for 0/0 and ::/0 (meaning all IPv4 and IPv6
addresses).
IP 地址可以使用 Net::IP 支持的任何语法指定。名称 all 是 0/0 和 ::/0 的别名(表示所有 IPv4 和 IPv6 地址)。
The default policy is allow.
默认策略是允许。
| Match 匹配 | POLICY=deny 策略=拒绝 | POLICY=allow 策略=允许 |
|---|---|---|
| Match Allow only 仅允许匹配 | allow 允许 | allow 允许 |
| Match Deny only 仅匹配拒绝 | deny 拒绝 | deny 拒绝 |
| No match 无匹配 | deny 拒绝 | allow 允许 |
| Match Both Allow & Deny | deny 拒绝 | allow 允许 |
18.2.2. Listening IP Address
18.2.2. 监听 IP 地址
By default the pveproxy and spiceproxy daemons listen on the wildcard
address and accept connections from both IPv4 and IPv6 clients.
默认情况下,pveproxy 和 spiceproxy 守护进程监听通配符地址,并接受来自 IPv4 和 IPv6 客户端的连接。
By setting LISTEN_IP in /etc/default/pveproxy you can control to which IP
address the pveproxy and spiceproxy daemons bind. The IP-address needs to
be configured on the system.
通过在 /etc/default/pveproxy 中设置 LISTEN_IP,您可以控制 pveproxy 和 spiceproxy 守护进程绑定到哪个 IP 地址。该 IP 地址需要在系统上配置。
Setting the sysctl net.ipv6.bindv6only to the non-default 1 will cause
the daemons to only accept connection from IPv6 clients, while usually also
causing lots of other issues. If you set this configuration we recommend to
either remove the sysctl setting, or set the LISTEN_IP to 0.0.0.0 (which
will only allow IPv4 clients).
将 sysctl net.ipv6.bindv6only 设置为非默认值 1 会导致守护进程仅接受来自 IPv6 客户端的连接,同时通常还会引发许多其他问题。如果您设置了此配置,建议要么移除该 sysctl 设置,要么将 LISTEN_IP 设置为 0.0.0.0(这将只允许 IPv4 客户端)。
LISTEN_IP can also be used to restrict the socket to an internal
interface, and thus have less exposure to the public internet, for example:
LISTEN_IP 也可以用来仅限制套接字绑定到内部接口,从而减少对公共互联网的暴露,例如:
LISTEN_IP="192.0.2.1"
Similarly, you can also set an IPv6 address:
同样,您也可以设置一个 IPv6 地址:
LISTEN_IP="2001:db8:85a3::1"
Note that if you want to specify a link-local IPv6 address, you need to provide
the interface name itself. For example:
请注意,如果您想指定一个链路本地 IPv6 地址,您需要提供接口名称本身。例如:
LISTEN_IP="fe80::c463:8cff:feb9:6a4e%vmbr0"
|
|
The nodes in a cluster need access to pveproxy for communication,
possibly on different sub-nets. It is not recommended to set LISTEN_IP on
clustered systems. 集群中的节点需要访问 pveproxy 以进行通信,可能位于不同的子网中。不建议在集群系统上设置 LISTEN_IP。 |
To apply the change you need to either reboot your node or fully restart the
pveproxy and spiceproxy services:
要应用更改,您需要重启节点或完全重启 pveproxy 和 spiceproxy 服务:
systemctl restart pveproxy.service spiceproxy.service
|
|
Unlike reload, a restart of the pveproxy service can interrupt some
long-running worker processes, for example a running console or shell from a
virtual guest. So, please use a maintenance window to bring this change in
effect. 与重新加载不同,重启 pveproxy 服务可能会中断一些长时间运行的工作进程,例如来自虚拟客户机的正在运行的控制台或 Shell。因此,请在维护时间窗口内进行此更改。 |
18.2.3. SSL Cipher Suite
18.2.3. SSL 密码套件
You can define the cipher list in /etc/default/pveproxy via the CIPHERS
(TLS <= 1.2) and CIPHERSUITES (TLS >= 1.3) keys. For example
您可以通过 /etc/default/pveproxy 中的 CIPHERS(TLS <= 1.2)和 CIPHERSUITES(TLS >= 1.3)键来定义密码列表。例如
CIPHERS="ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256" CIPHERSUITES="TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256:TLS_AES_128_GCM_SHA256"
Above is the default. See the ciphers(1) man page from the openssl
package for a list of all available options.
以上是默认设置。有关所有可用选项的列表,请参阅 openssl 包中的 ciphers(1) 手册页。
Additionally, you can set the client to choose the cipher used in
/etc/default/pveproxy (default is the first cipher in the list available to
both client and pveproxy):
此外,您可以设置客户端在 /etc/default/pveproxy 中选择使用的密码(默认是客户端和 pveproxy 都可用的列表中的第一个密码):
HONOR_CIPHER_ORDER=0
18.2.4. Supported TLS versions
18.2.4. 支持的 TLS 版本
The insecure SSL versions 2 and 3 are unconditionally disabled for pveproxy.
TLS versions below 1.1 are disabled by default on recent OpenSSL versions,
which is honored by pveproxy (see /etc/ssl/openssl.cnf).
不安全的 SSL 版本 2 和 3 在 pveproxy 中被无条件禁用。较新的 OpenSSL 版本默认禁用低于 1.1 的 TLS 版本,pveproxy 也遵循这一设置(参见 /etc/ssl/openssl.cnf)。
To disable TLS version 1.2 or 1.3, set the following in /etc/default/pveproxy:
要禁用 TLS 版本 1.2 或 1.3,请在 /etc/default/pveproxy 中设置以下内容:
DISABLE_TLS_1_2=1
or, respectively: 或者,分别设置:
DISABLE_TLS_1_3=1
|
|
Unless there is a specific reason to do so, it is not recommended to
manually adjust the supported TLS versions. 除非有特定原因,否则不建议手动调整支持的 TLS 版本。 |
18.2.5. Diffie-Hellman Parameters
18.2.5. Diffie-Hellman 参数
You can define the used Diffie-Hellman parameters in
/etc/default/pveproxy by setting DHPARAMS to the path of a file
containing DH parameters in PEM format, for example
您可以通过在 /etc/default/pveproxy 中设置 DHPARAMS 为包含 PEM 格式 DH 参数的文件路径,来定义使用的 Diffie-Hellman 参数,例如
DHPARAMS="/path/to/dhparams.pem"
If this option is not set, the built-in skip2048 parameters will be
used.
如果未设置此选项,将使用内置的 skip2048 参数。
|
|
DH parameters are only used if a cipher suite utilizing the DH key
exchange algorithm is negotiated. 只有在协商使用 DH 密钥交换算法的密码套件时,才会使用 DH 参数。 |
18.2.6. Alternative HTTPS certificate
18.2.6. 替代 HTTPS 证书
You can change the certificate used to an external one or to one obtained via
ACME.
您可以将使用的证书更改为外部证书或通过 ACME 获取的证书。
pveproxy uses /etc/pve/local/pveproxy-ssl.pem and
/etc/pve/local/pveproxy-ssl.key, if present, and falls back to
/etc/pve/local/pve-ssl.pem and /etc/pve/local/pve-ssl.key.
The private key may not use a passphrase.
pveproxy 使用 /etc/pve/local/pveproxy-ssl.pem 和 /etc/pve/local/pveproxy-ssl.key(如果存在),否则回退到 /etc/pve/local/pve-ssl.pem 和 /etc/pve/local/pve-ssl.key。私钥不得使用密码短语。
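A minimal sketch of switching to an externally obtained certificate, assuming a full certificate chain and an unencrypted key are already at hand (the file names below are placeholders):
# cp fullchain.pem /etc/pve/local/pveproxy-ssl.pem
# cp privkey.pem /etc/pve/local/pveproxy-ssl.key
# systemctl restart pveproxy.service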
It is possible to override the location of the certificate private key
/etc/pve/local/pveproxy-ssl.key by setting TLS_KEY_FILE in
/etc/default/pveproxy, for example:
可以通过在 /etc/default/pveproxy 中设置 TLS_KEY_FILE 来覆盖证书私钥 /etc/pve/local/pveproxy-ssl.key 的位置,例如:
TLS_KEY_FILE="/secrets/pveproxy.key"
|
|
The included ACME integration does not honor this setting. 内置的 ACME 集成不支持此设置。 |
See the Host System Administration chapter of the documentation for details.
详情请参阅文档中的主机系统管理章节。
18.2.7. Response Compression
18.2.7. 响应压缩
By default pveproxy uses gzip HTTP-level compression for compressible
content, if the client supports it. This can be disabled in /etc/default/pveproxy:
默认情况下,pveproxy 对可压缩内容使用 gzip HTTP 级别压缩,前提是客户端支持该功能。此功能可以在 /etc/default/pveproxy 中禁用。
COMPRESSION=0
18.2.8. Real Client IP Logging
18.2.8. 真实客户端 IP 日志记录
By default, pveproxy logs the IP address of the client that sent the request.
In cases where a proxy server is in front of pveproxy, it may be desirable to
log the IP of the client making the request instead of the proxy IP.
默认情况下,pveproxy 会记录发送请求的客户端的 IP 地址。在 pveproxy 前面有代理服务器的情况下,可能希望记录发出请求的客户端的 IP,而不是代理的 IP。
To enable processing of a HTTP header set by the proxy for logging purposes, set
PROXY_REAL_IP_HEADER to the name of the header to retrieve the client IP from. For
example:
要启用处理代理设置的用于日志记录的 HTTP 头,请将 PROXY_REAL_IP_HEADER 设置为用于获取客户端 IP 的头名称。例如:
PROXY_REAL_IP_HEADER="X-Forwarded-For"
Any invalid values passed in this header will be ignored.
传入此头部的任何无效值将被忽略。
The default behavior is to log the value in this header on all incoming requests.
To define a list of proxy servers that should be trusted to set the above HTTP
header, set PROXY_REAL_IP_ALLOW_FROM, for example:
默认行为是在所有传入请求中记录此头部的值。要定义应被信任设置上述 HTTP 头部的代理服务器列表,请设置 PROXY_REAL_IP_ALLOW_FROM,例如:
PROXY_REAL_IP_ALLOW_FROM="192.168.0.2"
The PROXY_REAL_IP_ALLOW_FROM setting also supports values similar to the ALLOW_FROM
and DENY_FROM settings.
PROXY_REAL_IP_ALLOW_FROM 设置也支持类似于 ALLOW_FROM 和 DENY_FROM 设置的值。
IP addresses can be specified using any syntax understood by Net::IP. The
name all is an alias for 0/0 and ::/0 (meaning all IPv4 and IPv6
addresses).
IP 地址可以使用 Net::IP 支持的任何语法指定。名称 all 是 0/0 和 ::/0 的别名(表示所有 IPv4 和 IPv6 地址)。
18.3. pvestatd - Proxmox VE Status Daemon
18.3. pvestatd - Proxmox VE 状态守护进程
This daemon queries the status of VMs, storages and containers at
regular intervals. The result is sent to all nodes in the cluster.
该守护进程定期查询虚拟机、存储和容器的状态。结果会发送到集群中的所有节点。
18.4. spiceproxy - SPICE Proxy Service
18.4. spiceproxy - SPICE 代理服务
SPICE (the Simple Protocol for Independent
Computing Environments) is an open remote computing solution,
providing client access to remote displays and devices (e.g. keyboard,
mouse, audio). The main use case is to get remote access to virtual
machines and container.
SPICE(独立计算环境简单协议)是一种开放的远程计算解决方案,提供客户端对远程显示和设备(如键盘、鼠标、音频)的访问。主要用途是远程访问虚拟机和容器。
This daemon listens on TCP port 3128, and implements an HTTP proxy to
forward CONNECT request from the SPICE client to the correct Proxmox VE
VM. It runs as user www-data and has very limited permissions.
该守护进程监听 TCP 端口 3128,实现了一个 HTTP 代理,用于将来自 SPICE 客户端的 CONNECT 请求转发到正确的 Proxmox VE 虚拟机。它以用户 www-data 身份运行,权限非常有限。
18.5. pvescheduler - Proxmox VE Scheduler Daemon
18.5. pvescheduler - Proxmox VE 调度守护进程
This daemon is responsible for starting jobs according to the schedule,
such as replication and vzdump jobs.
该守护进程负责根据计划启动任务,例如复制和 vzdump 任务。
For vzdump jobs, it gets its configuration from the file /etc/pve/jobs.cfg
对于 vzdump 任务,它从文件 /etc/pve/jobs.cfg 获取配置
19. Useful Command-line Tools
19. 有用的命令行工具
19.1. pvesubscription - Subscription Management
19.1. pvesubscription - 订阅管理
This tool is used to handle Proxmox VE subscriptions.
此工具用于管理 Proxmox VE 订阅。
19.2. pveperf - Proxmox VE Benchmark Script
19.2. pveperf - Proxmox VE 基准测试脚本
Tries to gather some CPU/hard disk performance data on the hard disk
mounted at PATH (/ is used as default):
尝试收集挂载在 PATH 上的硬盘的 CPU/硬盘性能数据(默认使用 /):
- CPU BOGOMIPS
-
bogomips sum of all CPUs
所有 CPU 的 bogomips 总和 - REGEX/SECOND 每秒正则表达式数
-
regular expressions per second (perl performance test), should be above 300000
每秒正则表达式数(perl 性能测试),应高于 300000 - HD SIZE 硬盘大小
-
hard disk size 硬盘大小
- BUFFERED READS 缓冲读取
-
simple HD read test. Modern HDs should reach at least 40 MB/sec
简单硬盘读取测试。现代硬盘的速度应至少达到 40 MB/秒 - AVERAGE SEEK TIME 平均寻道时间
-
tests average seek time. Fast SCSI HDs reach values < 8 milliseconds. Common IDE/SATA disks get values from 15 to 20 ms.
测试平均寻道时间。快速的 SCSI 硬盘可达到小于 8 毫秒的数值。常见的 IDE/SATA 硬盘的数值在 15 到 20 毫秒之间。 - FSYNCS/SECOND 每秒 FSYNC 次数
-
value should be greater than 200 (you should enable write back cache mode on your RAID controller - needs a battery backed cache (BBWC)).
数值应大于 200(你应该在 RAID 控制器上启用写回缓存模式——需要电池备份缓存(BBWC))。 - DNS EXT DNS 扩展
-
average time to resolve an external DNS name
解析外部 DNS 名称的平均时间 - DNS INT
-
average time to resolve a local DNS name
解析本地 DNS 名称的平均时间
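A typical invocation tests the root file system; passing a path tests the disk mounted there instead (the path below is only an example):
# pveperf
# pveperf /mnt/backup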
19.3. Shell interface for the Proxmox VE API
19.3. Proxmox VE API 的 Shell 接口
The Proxmox VE management tool (pvesh) allows you to directly invoke API
functions, without using the REST/HTTPS server.
Proxmox VE 管理工具(pvesh)允许直接调用 API 功能,无需使用 REST/HTTPS 服务器。
|
|
Only root is allowed to do that. 只有 root 用户被允许这样做。 |
19.3.1. EXAMPLES 19.3.1. 示例
Get the list of nodes in my cluster
获取我的集群中的节点列表
# pvesh get /nodes
Get a list of available options for the datacenter
获取数据中心可用选项列表
# pvesh usage cluster/options -v
Set the HTML5 NoVNC console as the default console for the datacenter
将 HTMl5 NoVNC 控制台设置为数据中心的默认控制台
# pvesh set cluster/options -console html5
20. Frequently Asked Questions
20. 常见问题解答
|
|
New FAQs are appended to the bottom of this section. 新的常见问题将添加到本节底部。 |
-
What distribution is Proxmox VE based on?
Proxmox VE 基于哪个发行版?Proxmox VE is based on Debian GNU/Linux
Proxmox VE 基于 Debian GNU/Linux -
What license does the Proxmox VE project use?
Proxmox VE 项目使用什么许可证?Proxmox VE code is licensed under the GNU Affero General Public License, version 3.
Proxmox VE 代码采用 GNU Affero 通用公共许可证第 3 版授权。 -
Will Proxmox VE run on a 32bit processor?
Proxmox VE 能运行在 32 位处理器上吗?Proxmox VE works only on 64-bit CPUs (AMD or Intel). There is no plan for 32-bit for the platform.
Proxmox VE 仅支持 64 位 CPU(AMD 或 Intel)。平台暂无 32 位版本的计划。VMs and Containers can be both 32-bit and 64-bit.
虚拟机和容器都可以是 32 位或 64 位。 -
Does my CPU support virtualization?
我的 CPU 支持虚拟化吗?To check if your CPU is virtualization compatible, check for the vmx or svm tag in this command output:
要检查您的 CPU 是否支持虚拟化,请在此命令输出中查找 vmx 或 svm 标签:egrep '(vmx|svm)' /proc/cpuinfo
-
Supported Intel CPUs 支持的 Intel CPU
64-bit processors with Intel Virtualization Technology (Intel VT-x) support. (List of processors with Intel VT and 64-bit)
支持 Intel 虚拟化技术(Intel VT-x)的 64 位处理器。(支持 Intel VT 和 64 位的处理器列表) -
Supported AMD CPUs 支持的 AMD CPU
64-bit processors with AMD Virtualization Technology (AMD-V) support.
支持 AMD 虚拟化技术(AMD-V)的 64 位处理器。 -
What is a container/virtual environment (VE)/virtual private server (VPS)?
什么是容器/虚拟环境(VE)/虚拟专用服务器(VPS)?In the context of containers, these terms all refer to the concept of operating-system-level virtualization. Operating-system-level virtualization is a method of virtualization, in which the kernel of an operating system allows for multiple isolated instances, that all share the kernel. When referring to LXC, we call such instances containers. Because containers use the host’s kernel rather than emulating a full operating system, they require less overhead, but are limited to Linux guests.
在容器的上下文中,这些术语都指操作系统级虚拟化的概念。操作系统级虚拟化是一种虚拟化方法,其中操作系统的内核允许多个隔离的实例共享同一个内核。提到 LXC 时,我们称这些实例为容器。由于容器使用主机的内核而不是模拟完整的操作系统,因此它们所需的开销较小,但仅限于 Linux 客户机。 -
What is a QEMU/KVM guest (or VM)?
什么是 QEMU/KVM 客户机(或虚拟机)?A QEMU/KVM guest (or VM) is a guest system running virtualized under Proxmox VE using QEMU and the Linux KVM kernel module.
QEMU/KVM 客户机(或虚拟机)是在 Proxmox VE 下使用 QEMU 和 Linux KVM 内核模块虚拟化运行的客户系统。 -
What is QEMU? 什么是 QEMU?
QEMU is a generic and open source machine emulator and virtualizer. QEMU uses the Linux KVM kernel module to achieve near native performance by executing the guest code directly on the host CPU. It is not limited to Linux guests but allows arbitrary operating systems to run.
QEMU 是一个通用的开源机器模拟器和虚拟化器。QEMU 使用 Linux KVM 内核模块,通过在主机 CPU 上直接执行客户机代码,实现接近原生的性能。它不仅限于 Linux 客户机,还允许运行任意操作系统。 -
How long will my Proxmox VE version be supported?
我的 Proxmox VE 版本将支持多久?Proxmox VE versions are supported at least as long as the corresponding Debian Version is oldstable. Proxmox VE uses a rolling release model and using the latest stable version is always recommended.
Proxmox VE 版本的支持时间至少与相应的 Debian 版本处于旧稳定版期间相同。Proxmox VE 采用滚动发布模式,始终推荐使用最新的稳定版本。
| Proxmox VE Version Proxmox VE 版本 | Debian Version Debian 版本 | First Release 首次发布 | Debian EOL Debian 终止支持 | Proxmox EOL Proxmox 终止支持 |
|---|---|---|---|---|
| Proxmox VE 8 | Debian 12 (Bookworm) | 2023-06 | tba 待定 | tba 待定 |
| Proxmox VE 7 | Debian 11 (Bullseye) | 2021-07 | 2024-07 | 2024-07 |
| Proxmox VE 6 | Debian 10 (Buster) | 2019-07 | 2022-09 | 2022-09 |
| Proxmox VE 5 | Debian 9 (Stretch) | 2017-07 | 2020-07 | 2020-07 |
| Proxmox VE 4 | Debian 8 (Jessie) | 2015-10 | 2018-06 | 2018-06 |
| Proxmox VE 3 | Debian 7 (Wheezy) | 2013-05 | 2016-04 | 2017-02 |
| Proxmox VE 2 | Debian 6 (Squeeze) | 2012-04 | 2014-05 | 2014-05 |
| Proxmox VE 1 | Debian 5 (Lenny) | 2008-10 | 2012-03 | 2013-01 |
-
How can I upgrade Proxmox VE to the next point release?
如何将 Proxmox VE 升级到下一个小版本?Minor version upgrades, for example upgrading from Proxmox VE in version 7.1 to 7.2 or 7.3, can be done just like any normal update. But you should still check the release notes for any relevant notable, or breaking change.
小版本升级,例如从 Proxmox VE 7.1 升级到 7.2 或 7.3,可以像普通更新一样进行。但你仍然应该查看发布说明,了解任何相关的重要或破坏性变更。For the update itself use either the Web UI Node → Updates panel or through the CLI with:
对于更新本身,可以使用 Web UI 的节点 → 更新面板,或通过命令行界面执行:apt update apt full-upgrade
Always ensure you correctly setup the package repositories and only continue with the actual upgrade if apt update did not hit any error.
始终确保正确设置包仓库,并且只有在 apt update 没有出现任何错误时,才继续进行实际升级。 -
How can I upgrade Proxmox VE to the next major release?
如何将 Proxmox VE 升级到下一个主要版本?Major version upgrades, for example going from Proxmox VE 4.4 to 5.0, are also supported. They must be carefully planned and tested and should never be started without having a current backup ready.
主要版本升级,例如从 Proxmox VE 4.4 升级到 5.0,也是支持的。升级必须经过仔细规划和测试,且绝不应在没有准备好当前备份的情况下开始。Although the specific upgrade steps depend on your respective setup, we provide general instructions and advice of how a upgrade should be performed:
虽然具体的升级步骤取决于您的具体环境,我们提供了一般的指导和建议,说明升级应如何进行: -
LXC vs LXD vs Proxmox Containers vs Docker
LXC vs LXD vs Proxmox 容器 vs DockerLXC is a userspace interface for the Linux kernel containment features. Through a powerful API and simple tools, it lets Linux users easily create and manage system containers. LXC, as well as the former OpenVZ, aims at system virtualization. Thus, it allows you to run a complete OS inside a container, where you log in using ssh, add users, run apache, etc…
LXC 是 Linux 内核隔离功能的用户空间接口。通过强大的 API 和简单的工具,它让 Linux 用户能够轻松创建和管理系统容器。LXC 以及之前的 OpenVZ,目标都是系统虚拟化。因此,它允许你在容器内运行完整的操作系统,你可以通过 ssh 登录,添加用户,运行 apache 等等……LXD is built on top of LXC to provide a new, better user experience. Under the hood, LXD uses LXC through liblxc and its Go binding to create and manage the containers. It’s basically an alternative to LXC’s tools and distribution template system with the added features that come from being controllable over the network.
LXD 构建于 LXC 之上,提供了全新的、更好的用户体验。在底层,LXD 通过 liblxc 及其 Go 绑定使用 LXC 来创建和管理容器。它基本上是 LXC 工具和发行版模板系统的替代方案,并且增加了通过网络进行控制的功能。Proxmox Containers are how we refer to containers that are created and managed using the Proxmox Container Toolkit (pct). They also target system virtualization and use LXC as the basis of the container offering. The Proxmox Container Toolkit (pct) is tightly coupled with Proxmox VE. This means that it is aware of cluster setups, and it can use the same network and storage resources as QEMU virtual machines (VMs). You can even use the Proxmox VE firewall, create and restore backups, or manage containers using the HA framework. Everything can be controlled over the network using the Proxmox VE API.
Proxmox 容器是指使用 Proxmox 容器工具包(pct)创建和管理的容器。它们同样面向系统虚拟化,并以 LXC 作为容器的基础。Proxmox 容器工具包(pct)与 Proxmox VE 紧密结合。这意味着它能够识别集群设置,并且可以使用与 QEMU 虚拟机(VM)相同的网络和存储资源。你甚至可以使用 Proxmox VE 防火墙,创建和恢复备份,或使用 HA 框架管理容器。所有操作都可以通过 Proxmox VE API 在网络上进行控制。Docker aims at running a single application in an isolated, self-contained environment. These are generally referred to as “Application Containers”, rather than “System Containers”. You manage a Docker instance from the host, using the Docker Engine command-line interface. It is not recommended to run docker directly on your Proxmox VE host.
Docker 旨在在隔离的、自包含的环境中运行单个应用程序。这些通常被称为“应用容器”,而非“系统容器”。你可以通过主机上的 Docker Engine 命令行界面管理 Docker 实例。不建议直接在你的 Proxmox VE 主机上运行 Docker。If you want to run application containers, for example, Docker images, it is best to run them inside a Proxmox QEMU VM.
如果你想运行应用容器,例如 Docker 镜像,最好将它们运行在 Proxmox QEMU 虚拟机内。
21. Bibliography 21. 参考文献
Books about Proxmox VE 关于 Proxmox VE 的书籍
-
[Ahmed16] Wasim Ahmed. Mastering Proxmox - Third Edition. Packt Publishing, 2017. ISBN 978-1788397605
[Ahmed16] Wasim Ahmed。《精通 Proxmox - 第三版》。Packt Publishing,2017 年。ISBN 978-1788397605 -
[Ahmed15] Wasim Ahmed. Proxmox Cookbook. Packt Publishing, 2015. ISBN 978-1783980901
[Ahmed15] Wasim Ahmed。《Proxmox 食谱》。Packt Publishing,2015 年。ISBN 978-1783980901 -
[Cheng14] Simon M.C. Cheng. Proxmox High Availability. Packt Publishing, 2014. ISBN 978-1783980888
[Cheng14] Simon M.C. Cheng。《Proxmox 高可用性》。Packt Publishing,2014 年。ISBN 978-1783980888 -
[Goldman16] Rik Goldman. Learning Proxmox VE. Packt Publishing, 2016. ISBN 978-1783981786
[Goldman16] Rik Goldman. 学习 Proxmox VE. Packt Publishing, 2016. ISBN 978-1783981786 -
[Surber16] Lee R. Surber. Virtualization Complete: Business Basic Edition. Linux Solutions (LRS-TEK), 2016. ASIN B01BBVQZT6
[Surber16] Lee R. Surber. 虚拟化全书:商业基础版. Linux Solutions (LRS-TEK), 2016. ASIN B01BBVQZT6
Books about related technology 相关技术书籍
-
[Hertzog13] Raphaël Hertzog & Roland Mas. The Debian Administrator's Handbook: Debian Bullseye from Discovery to Mastery. Freexian SARL, 2021. ISBN 979-10-91414-20-3
[Hertzog13] Raphaël Hertzog, Roland Mas., Freexian SARL 《Debian 管理员手册:从入门到精通 Debian Bullseye》,Freexian, 2021. ISBN 979-10-91414-20-3 -
[Bir96] Kenneth P. Birman. Building Secure and Reliable Network Applications. Manning Publications Co, 1996. ISBN 978-1884777295
[Bir96] Kenneth P. Birman. 构建安全可靠的网络应用程序。Manning Publications Co,1996 年。ISBN 978-1884777295 -
[Walsh10] Norman Walsh. DocBook 5: The Definitive Guide. O’Reilly & Associates, 2010. ISBN 978-0596805029
[Walsh10] Norman Walsh. DocBook 5:权威指南。O’Reilly & Associates,2010 年。ISBN 978-0596805029 -
[Richardson07] Leonard Richardson & Sam Ruby. RESTful Web Services. O’Reilly Media, 2007. ISBN 978-0596529260
[Richardson07] Leonard Richardson & Sam Ruby. RESTful Web 服务。O’Reilly Media,2007 年。ISBN 978-0596529260 -
[Singh15] Karan Singh. Learning Ceph. Packt Publishing, 2015. ISBN 978-1783985623
[Singh15] Karan Singh. 学习 Ceph。Packt Publishing,2015 年。ISBN 978-1783985623 -
[Singh16] Karan Singh. Ceph Cookbook. Packt Publishing, 2016. ISBN 978-1784393502
[Singh16] Karan Singh。《Ceph Cookbook》Packt Publishing,2016 年。ISBN 978-1784393502 -
[Mauerer08] Wolfgang Mauerer. Professional Linux Kernel Architecture. John Wiley & Sons, 2008. ISBN 978-0470343432
[Mauerer08] Wolfgang Mauerer。《Professional Linux Kernel Architecture》。John Wiley & Sons,2008 年。ISBN 978-0470343432 -
[Loshin03] Pete Loshin, IPv6: Theory, Protocol, and Practice, 2nd Edition. Morgan Kaufmann, 2003. ISBN 978-1558608108
[Loshin03] Pete Loshin。《IPv6:理论、协议与实践(第 2 版)》。Morgan Kaufmann,2003 年。ISBN 978-1558608108 -
[Loeliger12] Jon Loeliger & Matthew McCullough. Version Control with Git: Powerful tools and techniques for collaborative software development. O’Reilly and Associates, 2012. ISBN 978-1449316389
[Loeliger12] Jon Loeliger & Matthew McCullough。《使用 Git 进行版本控制:协作软件开发的强大工具与技术》。O’Reilly and Associates,2012 年。ISBN 978-1449316389 -
[Kreibich10] Jay A. Kreibich. Using SQLite, O’Reilly and Associates, 2010. ISBN 978-0596521189
[Kreibich10] Jay A. Kreibich. 使用 SQLite,O’Reilly and Associates,2010 年。ISBN 978-0596521189
Books about related topics 相关主题书籍
22. Appendix A: Command-line Interface
22. 附录 A:命令行界面
22.1. General 22.1. 概述
Regarding the historically non-uniform casing style for options, see
the related section for configuration files.
关于选项历史上不统一的大小写风格,请参见配置文件相关章节。
22.2. Output format options [FORMAT_OPTIONS]
22.2. 输出格式选项 [FORMAT_OPTIONS]
It is possible to specify the output format using the
--output-format parameter. The default format text uses ASCII-art
to draw nice borders around tables. It additionally transforms some
values into human-readable text, for example:
可以使用 --output-format 参数指定输出格式。默认的 text 格式使用 ASCII 艺术绘制表格的漂亮边框。此外,它还将某些值转换为人类可读的文本,例如:
-
Unix epoch timestamps are displayed as ISO 8601 date strings.
Unix 纪元以 ISO 8601 日期字符串显示。 -
Durations are displayed as a week/day/hour/minute/second count, e.g. 1d 5h.
持续时间以周/天/小时/分钟/秒计数显示,例如 1 天 5 小时。 -
Byte size values include units (B, KiB, MiB, GiB, TiB, PiB).
字节大小值包含单位(B、KiB、MiB、GiB、TiB、PiB)。 -
Fractions are displayed as a percentage, e.g. 1.0 is displayed as 100%.
小数以百分比显示,例如 1.0 显示为 100%。
You can also completely suppress output using option --quiet.
你也可以使用选项 --quiet 完全抑制输出。
-
--human-readable <boolean> (default = 1)
--human-readable <boolean>(默认 = 1) -
Call output rendering functions to produce human readable text.
调用输出渲染函数以生成可读的文本。 -
--noborder <boolean> (default = 0)
--noborder <boolean>(默认 = 0) -
Do not draw borders (for text format).
不绘制边框(针对文本格式)。 -
--noheader <boolean> (default = 0)
--noheader <boolean>(默认值 = 0) -
Do not show column headers (for text format).
不显示列标题(针对文本格式)。 -
--output-format <json | json-pretty | text | yaml> (default = text)
--output-format <json | json-pretty | text | yaml>(默认值 = text) -
Output format. 输出格式。
- --quiet <boolean>
-
Suppress printing results.
抑制打印结果。
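As an illustrative example (not part of the reference list above), the same query can be switched from the bordered text output to machine-readable JSON, or stripped of decoration for scripting; pvesm status is used here only as an arbitrary command that accepts [FORMAT_OPTIONS]:
pvesm status --output-format json-pretty
pvesm status --noborder 1 --noheader 1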
22.3. pvesm - Proxmox VE Storage Manager
22.3. pvesm - Proxmox VE 存储管理器
pvesm <COMMAND> [ARGS] [OPTIONS]
pvesm <命令> [参数] [选项]
pvesm add <type> <storage> [OPTIONS]
pvesm add <类型> <存储> [选项]
Create a new storage. 创建一个新的存储。
-
<type>: <btrfs | cephfs | cifs | dir | esxi | glusterfs | iscsi | iscsidirect | lvm | lvmthin | nfs | pbs | rbd | zfs | zfspool>
<类型>: <btrfs | cephfs | cifs | dir | esxi | glusterfs | iscsi | iscsidirect | lvm | lvmthin | nfs | pbs | rbd | zfs | zfspool> -
Storage type. 存储类型。
- <storage>: <storage ID> <storage>: <存储 ID>
-
The storage identifier. 存储标识符。
-
--authsupported <string>
--authsupported <字符串> -
Authsupported. 支持认证。
- --base <string>
-
Base volume. This volume is automatically activated.
基础卷。该卷会自动激活。 - --blocksize <string>
-
block size 块大小
-
--bwlimit [clone=<LIMIT>] [,default=<LIMIT>] [,migration=<LIMIT>] [,move=<LIMIT>] [,restore=<LIMIT>]
--bwlimit [clone=<限制>] [,default=<限制>] [,migration=<限制>] [,move=<限制>] [,restore=<限制>] -
Set I/O bandwidth limit for various operations (in KiB/s).
为各种操作设置 I/O 带宽限制(以 KiB/s 为单位)。 - --comstar_hg <string> --comstar_hg <字符串>
-
host group for comstar views
comstar 视图的主机组 - --comstar_tg <string>
-
target group for comstar views
comstar 视图的目标组 - --content <string>
-
Allowed content types. 允许的内容类型。
The value rootdir is used for containers, and the value images for VMs.
值 rootdir 用于容器,值 images 用于虚拟机。 - --content-dirs <string>
-
Overrides for default content type directories.
覆盖默认内容类型目录。 -
--create-base-path <boolean> (default = yes)
--create-base-path <boolean>(默认 = 是) -
Create the base directory if it doesn’t exist.
如果基础目录不存在,则创建该目录。 -
--create-subdirs <boolean> (default = yes)
--create-subdirs <boolean>(默认 = 是) -
Populate the directory with the default structure.
使用默认结构填充该目录。 - --data-pool <string>
-
Data Pool (for erasure coding only)
数据池(仅用于擦除编码) - --datastore <string>
-
Proxmox Backup Server datastore name.
Proxmox 备份服务器数据存储名称。 - --disable <boolean>
-
Flag to disable the storage.
用于禁用存储的标志。 - --domain <string>
-
CIFS domain. CIFS 域。
-
--encryption-key a file containing an encryption key, or the special value "autogen"
--encryption-key 一个包含加密密钥的文件,或特殊值 "autogen" -
Encryption key. Use autogen to generate one automatically without passphrase.
加密密钥。使用 autogen 可自动生成一个无密码短语的密钥。 - --export <string>
-
NFS export path. NFS 导出路径。
- --fingerprint ([A-Fa-f0-9]{2}:){31}[A-Fa-f0-9]{2}
-
Certificate SHA 256 fingerprint.
证书的 SHA 256 指纹。 - --format <qcow2 | raw | subvol | vmdk>
-
Default image format. 默认镜像格式。
- --fs-name <string>
-
The Ceph filesystem name.
Ceph 文件系统名称。 - --fuse <boolean>
-
Mount CephFS through FUSE.
通过 FUSE 挂载 CephFS。 -
--is_mountpoint <string> (default = no)
--is_mountpoint <string>(默认 = no) -
Assume the given path is an externally managed mountpoint and consider the storage offline if it is not mounted. Using a boolean (yes/no) value serves as a shortcut to using the target path in this field.
假设给定路径是外部管理的挂载点,如果未挂载则将存储视为离线。使用布尔值(yes/no)作为此字段中使用目标路径的快捷方式。 - --iscsiprovider <string>
-
iscsi provider iscsi 提供者
-
--keyring file containing the keyring to authenticate in the Ceph cluster
--keyring 文件,包含用于在 Ceph 集群中进行身份验证的密钥环 -
Client keyring contents (for external clusters).
客户端密钥环内容(用于外部集群)。 -
--krbd <boolean> (default = 0)
--krbd <布尔值>(默认 = 0) -
Always access rbd through krbd kernel module.
始终通过 krbd 内核模块访问 rbd。 - --lio_tpg <string> --lio_tpg <字符串>
-
target portal group for Linux LIO targets
Linux LIO 目标的目标门户组 -
--master-pubkey a file containing a PEM-formatted master public key
--master-pubkey 一个包含 PEM 格式主公钥的文件 -
Base64-encoded, PEM-formatted public RSA key. Used to encrypt a copy of the encryption-key which will be added to each encrypted backup.
Base64 编码的 PEM 格式公钥 RSA 密钥。用于加密一份加密密钥的副本,该副本将被添加到每个加密备份中。 -
--max-protected-backups <integer> (-1 - N) (default = Unlimited for users with Datastore.Allocate privilege, 5 for other users)
--max-protected-backups <整数> (-1 - N)(默认值 = 对具有 Datastore.Allocate 权限的用户无限制,其他用户为 5) -
Maximal number of protected backups per guest. Use -1 for unlimited.
每个虚拟机最大受保护备份数量。使用 -1 表示无限制。 -
--maxfiles <integer> (0 - N)
--maxfiles <整数> (0 - N) -
Deprecated: use prune-backups instead. Maximal number of backup files per VM. Use 0 for unlimited.
已弃用:请改用 prune-backups。每个虚拟机最大备份文件数量。使用 0 表示无限制。 -
--mkdir <boolean> (default = yes)
--mkdir <boolean>(默认 = 是) -
Create the directory if it doesn’t exist and populate it with default sub-dirs. NOTE: Deprecated, use the create-base-path and create-subdirs options instead.
如果目录不存在,则创建该目录并填充默认的子目录。注意:已弃用,请改用 create-base-path 和 create-subdirs 选项。 - --monhost <string>
-
IP addresses of monitors (for external clusters).
监视器的 IP 地址(用于外部集群)。 - --mountpoint <string>
-
mount point 挂载点
- --namespace <string>
-
Namespace. 命名空间。
-
--nocow <boolean> (default = 0)
--nocow <boolean>(默认值 = 0) -
Set the NOCOW flag on files. Disables data checksumming and makes data errors unrecoverable, while allowing direct I/O. Only use this if data does not need to be any safer than on a single ext4-formatted disk with no underlying RAID system.
在文件上设置 NOCOW 标志。禁用数据校验和,并导致数据错误无法恢复,同时允许直接 I/O。仅当数据不需要比单个使用 ext4 格式且无底层 RAID 系统的磁盘更安全时使用此选项。 - --nodes <string>
-
List of nodes for which the storage configuration applies.
适用存储配置的节点列表。 - --nowritecache <boolean>
-
disable write caching on the target
禁用目标上的写缓存 - --options <string>
-
NFS/CIFS mount options (see man nfs or man mount.cifs)
NFS/CIFS 挂载选项(参见 man nfs 或 man mount.cifs) - --password <password>
-
Password for accessing the share/datastore.
访问共享/数据存储的密码。 - --path <string>
-
File system path. 文件系统路径。
- --pool <string>
-
Pool. 资源池。
-
--port <integer> (1 - 65535)
--port <整数> (1 - 65535) -
Use this port to connect to the storage instead of the default one (for example, with PBS or ESXi). For NFS and CIFS, use the options option to configure the port via the mount options.
使用此端口连接存储,而不是默认端口(例如,使用 PBS 或 ESXi 时)。对于 NFS 和 CIFS,请使用 options 选项通过挂载选项配置端口。 - --portal <string> --portal <字符串>
-
iSCSI portal (IP or DNS name with optional port).
iSCSI 门户(IP 或带可选端口的 DNS 名称)。 -
--preallocation <falloc | full | metadata | off> (default = metadata)
--preallocation <falloc | full | metadata | off>(默认 = metadata) -
Preallocation mode for raw and qcow2 images. Using metadata on raw images results in preallocation=off.
原始和 qcow2 镜像的预分配模式。对原始镜像使用 metadata 时,预分配模式为 off。 - --prune-backups [keep-all=<1|0>] [,keep-daily=<N>] [,keep-hourly=<N>] [,keep-last=<N>] [,keep-monthly=<N>] [,keep-weekly=<N>] [,keep-yearly=<N>]
-
The retention options with shorter intervals are processed first with --keep-last being the very first one. Each option covers a specific period of time. We say that backups within this period are covered by this option. The next option does not take care of already covered backups and only considers older backups.
保留选项按时间间隔从短到长依次处理,其中--keep-last 是第一个处理的。每个选项覆盖特定的时间段。我们说该时间段内的备份由该选项覆盖。下一个选项不会处理已覆盖的备份,只考虑更早的备份。 - --saferemove <boolean>
-
Zero-out data when removing LVs.
移除逻辑卷时清零数据。 - --saferemove_throughput <string>
-
Wipe throughput (cstream -t parameter value).
擦除吞吐量(cstream -t 参数值)。 - --server <string>
-
Server IP or DNS name.
服务器 IP 或 DNS 名称。 - --server2 <string>
-
Backup volfile server IP or DNS name.
备用 volfile 服务器的 IP 或 DNS 名称。Requires option(s): server
需要选项:server - --share <string>
-
CIFS share. CIFS 共享。
- --shared <boolean>
-
Indicate that this is a single storage with the same contents on all nodes (or all listed in the nodes option). It will not make the contents of a local storage automatically accessible to other nodes, it just marks an already shared storage as such!
表示这是一个单一存储,所有节点(或节点选项中列出的所有节点)上的内容相同。它不会自动使本地存储的内容对其他节点可访问,只是将已经共享的存储标记为共享! -
--skip-cert-verification <boolean> (default = false)
--skip-cert-verification <boolean>(默认 = false) -
Disable TLS certificate verification, only enable on fully trusted networks!
禁用 TLS 证书验证,仅在完全信任的网络上启用! -
--smbversion <2.0 | 2.1 | 3 | 3.0 | 3.11 | default> (default = default)
--smbversion <2.0 | 2.1 | 3 | 3.0 | 3.11 | default>(默认 = default) -
SMB protocol version. default if not set, negotiates the highest SMB2+ version supported by both the client and server.
SMB 协议版本。如果未设置,默认协商客户端和服务器都支持的最高 SMB2+ 版本。 - --sparse <boolean>
-
use sparse volumes 使用稀疏卷
- --subdir <string>
-
Subdir to mount. 要挂载的子目录。
- --tagged_only <boolean>
-
Only use logical volumes tagged with pve-vm-ID.
仅使用带有 pve-vm-ID 标签的逻辑卷。 - --target <string>
-
iSCSI target. iSCSI 目标。
- --thinpool <string> --thinpool <字符串>
-
LVM thin pool LV name.
LVM 精简池逻辑卷名称。 - --transport <rdma | tcp | unix>
-
Gluster transport: tcp or rdma
Gluster 传输:tcp 或 rdma - --username <string>
-
RBD Id. RBD ID。
- --vgname <string>
-
Volume group name. 卷组名称。
- --volume <string>
-
Glusterfs Volume. Glusterfs 卷。
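As a hedged example of pvesm add, the following adds an NFS storage for backups; the storage ID, server address and export path are placeholders and must be adapted to your environment:
pvesm add nfs backup-nfs --server 192.168.1.20 --export /mnt/backup --content backup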
pvesm alloc <storage> <vmid> <filename> <size> [OPTIONS]
Allocate disk images. 分配磁盘映像。
- <storage>: <storage ID> <storage>:<存储 ID>
-
The storage identifier. 存储标识符。
-
<vmid>: <integer> (100 - 999999999)
<vmid>:<整数>(100 - 999999999) -
Specify owner VM 指定所属虚拟机
- <filename>: <string> <filename>: <字符串>
-
The name of the file to create.
要创建的文件名。 - <size>: \d+[MG]?
-
Size in kilobyte (1024 bytes). Optional suffixes M (megabyte, 1024K) and G (gigabyte, 1024M)
大小,单位为千字节(1024 字节)。可选后缀 M(兆字节,1024K)和 G(千兆字节,1024M) - --format <qcow2 | raw | subvol | vmdk>
-
Format of the image.
镜像的格式。Requires option(s): size 需要选项:size
pvesm apiinfo
Returns APIVER and APIAGE.
返回 APIVER 和 APIAGE。
pvesm cifsscan
An alias for pvesm scan cifs.
pvesm scan cifs 的别名。
pvesm export <volume> <format> <filename> [OPTIONS]
pvesm export <volume> <format> <filename> [选项]
Used internally to export a volume.
内部使用,用于导出卷。
- <volume>: <string> <volume>: <字符串>
-
Volume identifier 卷标识符
-
<format>: <btrfs | qcow2+size | raw+size | tar+size | vmdk+size | zfs>
<format>: <btrfs | qcow2+大小 | raw+大小 | tar+大小 | vmdk+大小 | zfs> -
Export stream format 导出流格式
- <filename>: <string> <filename>: <字符串>
-
Destination file name 目标文件名
- --base (?^i:[a-z0-9_\-]{1,40})
-
Snapshot to start an incremental stream from
用于开始增量流的快照 - --snapshot (?^i:[a-z0-9_\-]{1,40})
-
Snapshot to export 要导出的快照
- --snapshot-list <string>
-
Ordered list of snapshots to transfer
要传输的快照有序列表 -
--with-snapshots <boolean> (default = 0)
--with-snapshots <boolean> (默认 = 0) -
Whether to include intermediate snapshots in the stream
是否在流中包含中间快照
pvesm extractconfig <volume>
Extract configuration from vzdump backup archive.
从 vzdump 备份归档中提取配置。
- <volume>: <string>
-
Volume identifier 卷标识符
pvesm free <volume> [OPTIONS]
pvesm free <volume> [选项]
Delete volume 删除卷
- <volume>: <string> <volume>: <字符串>
-
Volume identifier 卷标识符
-
--delay <integer> (1 - 30)
--delay <整数> (1 - 30) -
Time to wait for the task to finish. We return null if the task finishes within that time.
等待任务完成的时间。如果任务在该时间内完成,我们返回 null。 - --storage <storage ID> --storage <存储 ID>
-
The storage identifier. 存储标识符。
pvesm glusterfsscan
An alias for pvesm scan glusterfs.
pvesm scan glusterfs 的别名。
pvesm help [OPTIONS]
Get help about specified command.
获取指定命令的帮助。
- --extra-args <array> --extra-args <数组>
-
Shows help for a specific command
显示特定命令的帮助信息 - --verbose <boolean> --verbose <布尔值>
-
Verbose output format. 详细输出格式。
pvesm import <volume> <format> <filename> [OPTIONS]
pvesm import <volume> <format> <filename> [选项]
Used internally to import a volume.
内部使用,用于导入卷。
- <volume>: <string> <volume>: <字符串>
-
Volume identifier 卷标识符
-
<format>: <btrfs | qcow2+size | raw+size | tar+size | vmdk+size | zfs>
<format>: <btrfs | qcow2+大小 | raw+大小 | tar+大小 | vmdk+大小 | zfs> -
Import stream format 导入流格式
- <filename>: <string> <filename>: <字符串>
-
Source file name. If - is used, stdin is read; the tcp://<IP-or-CIDR> format allows using a TCP connection, and the unix://PATH-TO-SOCKET format a UNIX socket as input. Otherwise, the file is treated as a regular file.
源文件名。对于 - 使用 stdin,tcp://<IP 或 CIDR> 格式允许使用 TCP 连接,unix://路径-到-套接字 格式使用 UNIX 套接字作为输入。否则,文件被视为普通文件。 -
--allow-rename <boolean> (default = 0)
--allow-rename <boolean>(默认 = 0) -
Choose a new volume ID if the requested volume ID already exists, instead of throwing an error.
如果请求的卷 ID 已存在,则选择一个新的卷 ID,而不是抛出错误。 - --base (?^i:[a-z0-9_\-]{1,40})
-
Base snapshot of an incremental stream
增量流的基础快照 - --delete-snapshot (?^i:[a-z0-9_\-]{1,80})
-
A snapshot to delete on success
成功时要删除的快照 - --snapshot (?^i:[a-z0-9_\-]{1,40})
-
The current-state snapshot if the stream contains snapshots
如果流中包含快照,则为当前状态快照 -
--with-snapshots <boolean> (default = 0)
--with-snapshots <boolean>(默认 = 0) -
Whether the stream includes intermediate snapshots
流是否包含中间快照
pvesm iscsiscan
An alias for pvesm scan iscsi.
pvesm scan iscsi 的别名。
pvesm list <storage> [OPTIONS]
pvesm list <storage> [选项]
List storage content. 列出存储内容。
- <storage>: <storage ID> <storage>:<存储 ID>
-
The storage identifier. 存储标识符。
- --content <string>
-
Only list content of this type.
仅列出此类型的内容。 - --vmid <integer> (100 - 999999999)
-
Only list images for this VM
仅列出此虚拟机的镜像。
pvesm lvmscan
An alias for pvesm scan lvm.
pvesm scan lvm 的别名。
pvesm lvmthinscan
An alias for pvesm scan lvmthin.
pvesm scan lvmthin 的别名。
pvesm nfsscan
An alias for pvesm scan nfs.
pvesm scan nfs 的别名。
pvesm path <volume>
Get filesystem path for specified volume
获取指定存储卷的文件系统路径。
- <volume>: <string>
-
Volume identifier 卷标识符
pvesm prune-backups <storage> [OPTIONS]
pvesm prune-backups <storage> [选项]
Prune backups. Only those using the standard naming scheme are considered.
If no keep options are specified, those from the storage configuration are
used.
修剪备份。仅考虑使用标准命名方案的备份。如果未指定保留选项,则使用存储配置中的选项。
- <storage>: <storage ID>
-
The storage identifier. 存储标识符。
- --dry-run <boolean>
-
Only show what would be pruned, don’t delete anything.
仅显示将被清理的内容,不执行删除操作。 - --keep-all <boolean>
-
Keep all backups. Conflicts with the other options when true.
保留所有备份。设置为 true 时,与其他选项冲突。 - --keep-daily <N>
-
Keep backups for the last <N> different days. If there is more than one backup for a single day, only the latest one is kept.
保留最近<N>个不同日期的备份。如果某一天有多个备份,则只保留最新的一个。 - --keep-hourly <N>
-
Keep backups for the last <N> different hours. If there is more than one backup for a single hour, only the latest one is kept.
保留最近 <N> 个不同小时的备份。如果某一小时内有多个备份,则只保留最新的一个。 - --keep-last <N>
-
Keep the last <N> backups.
保留最近的 <N> 个备份。 - --keep-monthly <N>
-
Keep backups for the last <N> different months. If there is more than one backup for a single month, only the latest one is kept.
保留最近 <N> 个不同月份的备份。如果某个月有多个备份,则只保留最新的一个。 - --keep-weekly <N>
-
Keep backups for the last <N> different weeks. If there is more than one backup for a single week, only the latest one is kept.
保留最近 <N> 个不同周的备份。如果某周有多个备份,则只保留最新的一个。 - --keep-yearly <N>
-
Keep backups for the last <N> different years. If there is more than one backup for a single year, only the latest one is kept.
保留最近 <N> 个不同年份的备份。如果某一年有多个备份,则只保留最新的一个。 - --type <lxc | qemu>
-
Either qemu or lxc. Only consider backups for guests of this type.
选择 qemu 或 lxc。仅考虑该类型虚拟机的备份。 -
--vmid <integer> (100 - 999999999)
--vmid <整数> (100 - 999999999) -
Only consider backups for this guest.
仅考虑此虚拟机的备份。
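A cautious way to try this out is a dry run against a single guest; the storage ID and VMID below are examples only:
pvesm prune-backups local --keep-last 3 --keep-weekly 2 --vmid 100 --dry-run 1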
pvesm remove <storage> pvesm remove <存储>
Delete storage configuration.
删除存储配置。
- <storage>: <storage ID> <storage>:<storage ID>
-
The storage identifier. 存储标识符。
pvesm scan cifs <server> [OPTIONS]
pvesm scan cifs <server> [选项]
Scan remote CIFS server. 扫描远程 CIFS 服务器。
- <server>: <string>
-
The server address (name or IP).
服务器地址(名称或 IP)。 - --domain <string>
-
SMB domain (Workgroup). SMB 域(工作组)。
- --password <password>
-
User password. 用户密码。
- --username <string>
-
User name. 用户名。
pvesm scan glusterfs <server>
Scan remote GlusterFS server.
扫描远程 GlusterFS 服务器。
- <server>: <string>
-
The server address (name or IP).
服务器地址(名称或 IP)。
pvesm scan iscsi <portal>
Scan remote iSCSI server.
扫描远程 iSCSI 服务器。
- <portal>: <string>
-
The iSCSI portal (IP or DNS name with optional port).
iSCSI 入口(IP 或带可选端口的 DNS 名称)。
pvesm scan lvm
List local LVM volume groups.
列出本地 LVM 卷组。
pvesm scan lvmthin <vg>
List local LVM Thin Pools.
列出本地 LVM Thin 池。
- <vg>: [a-zA-Z0-9\.\+\_][a-zA-Z0-9\.\+\_\-]+
-
no description available
无可用描述
pvesm scan nfs <server>
Scan remote NFS server. 扫描远程 NFS 服务器。
- <server>: <string>
-
The server address (name or IP).
服务器地址(名称或 IP)。
pvesm scan pbs <server> <username> --password <string> [OPTIONS] [FORMAT_OPTIONS]
Scan remote Proxmox Backup Server.
扫描远程 Proxmox 备份服务器。
- <server>: <string>
-
The server address (name or IP).
服务器地址(名称或 IP)。 - <username>: <string>
-
User-name or API token-ID.
用户名或 API 代币 ID。 - --fingerprint ([A-Fa-f0-9]{2}:){31}[A-Fa-f0-9]{2}
-
Certificate SHA 256 fingerprint.
证书的 SHA 256 指纹。 - --password <string>
-
User password or API token secret.
用户密码或 API 代币密钥。 -
--port <integer> (1 - 65535) (default = 8007)
--port <整数> (1 - 65535) (默认 = 8007) -
Optional port. 可选端口。
pvesm scan zfs
Scan zfs pool list on local node.
扫描本地节点上的 zfs 池列表。
pvesm set <storage> [OPTIONS]
pvesm set <storage> [选项]
Update storage configuration.
更新存储配置。
- <storage>: <storage ID> <storage>:<存储 ID>
-
The storage identifier. 存储标识符。
- --blocksize <string>
-
block size 块大小
- --bwlimit [clone=<LIMIT>] [,default=<LIMIT>] [,migration=<LIMIT>] [,move=<LIMIT>] [,restore=<LIMIT>]
-
Set I/O bandwidth limit for various operations (in KiB/s).
设置各种操作的 I/O 带宽限制(以 KiB/s 为单位)。 - --comstar_hg <string>
-
host group for comstar views
comstar 视图的主机组 - --comstar_tg <string>
-
target group for comstar views
comstar 视图的目标组 - --content <string>
-
Allowed content types. 允许的内容类型。
The value rootdir is used for containers, and the value images for VMs.
值 rootdir 用于容器,值 images 用于虚拟机。 - --content-dirs <string>
-
Overrides for default content type directories.
覆盖默认内容类型目录。 -
--create-base-path <boolean> (default = yes)
--create-base-path <boolean>(默认 = 是) -
Create the base directory if it doesn’t exist.
如果基础目录不存在,则创建该目录。 -
--create-subdirs <boolean> (default = yes)
--create-subdirs <boolean>(默认 = 是) -
Populate the directory with the default structure.
用默认结构填充目录。 - --data-pool <string>
-
Data Pool (for erasure coding only)
数据池(仅用于纠删码) - --delete <string>
-
A list of settings you want to delete.
您想要删除的设置列表。 - --digest <string>
-
Prevent changes if current configuration file has a different digest. This can be used to prevent concurrent modifications.
如果当前配置文件的摘要不同,则阻止更改。此功能可用于防止并发修改。 - --disable <boolean>
-
Flag to disable the storage.
用于禁用存储的标志。 - --domain <string>
-
CIFS domain. CIFS 域。
-
--encryption-key a file containing an encryption key, or the special value "autogen"
--encryption-key 包含加密密钥的文件,或特殊值 "autogen" -
Encryption key. Use autogen to generate one automatically without passphrase.
加密密钥。使用 autogen 自动生成一个无密码短语的密钥。 - --fingerprint ([A-Fa-f0-9]{2}:){31}[A-Fa-f0-9]{2}
-
Certificate SHA 256 fingerprint.
证书 SHA 256 指纹。 - --format <qcow2 | raw | subvol | vmdk>
-
Default image format. 默认镜像格式。
- --fs-name <string>
-
The Ceph filesystem name.
Ceph 文件系统名称。 - --fuse <boolean>
-
Mount CephFS through FUSE.
通过 FUSE 挂载 CephFS。 -
--is_mountpoint <string> (default = no)
--is_mountpoint <string>(默认 = no) -
Assume the given path is an externally managed mountpoint and consider the storage offline if it is not mounted. Using a boolean (yes/no) value serves as a shortcut to using the target path in this field.
假设给定路径是外部管理的挂载点,如果未挂载则将存储视为离线。使用布尔值(yes/no)作为此字段中使用目标路径的快捷方式。 -
--keyring file containing the keyring to authenticate in the Ceph cluster
--keyring 包含用于在 Ceph 集群中认证的 keyring 文件 -
Client keyring contents (for external clusters).
客户端密钥环内容(用于外部集群)。 -
--krbd <boolean> (default = 0)
--krbd <布尔值>(默认 = 0) -
Always access rbd through krbd kernel module.
始终通过 krbd 内核模块访问 rbd。 - --lio_tpg <string> --lio_tpg <字符串>
-
target portal group for Linux LIO targets
Linux LIO 目标的目标门户组 -
--master-pubkey a file containing a PEM-formatted master public key
--master-pubkey 一个包含 PEM 格式主公钥的文件 -
Base64-encoded, PEM-formatted public RSA key. Used to encrypt a copy of the encryption-key which will be added to each encrypted backup.
Base64 编码的、PEM 格式的公 RSA 密钥。用于加密加密密钥的副本,该副本将添加到每个加密备份中。 -
--max-protected-backups <integer> (-1 - N) (default = Unlimited for users with Datastore.Allocate privilege, 5 for other users)
--max-protected-backups <整数> (-1 - N)(默认值 = 对具有 Datastore.Allocate 权限的用户无限制,对其他用户为 5) -
Maximal number of protected backups per guest. Use -1 for unlimited.
每个虚拟机最大保护备份数量。使用 -1 表示无限制。 -
--maxfiles <integer> (0 - N)
--maxfiles <整数> (0 - N) -
Deprecated: use prune-backups instead. Maximal number of backup files per VM. Use 0 for unlimited.
已弃用:请改用 prune-backups。每个虚拟机最大备份文件数量。使用 0 表示无限制。 -
--mkdir <boolean> (default = yes)
--mkdir <布尔值>(默认 = 是) -
Create the directory if it doesn’t exist and populate it with default sub-dirs. NOTE: Deprecated, use the create-base-path and create-subdirs options instead.
如果目录不存在,则创建该目录并用默认子目录填充。注意:已弃用,请改用 create-base-path 和 create-subdirs 选项。 - --monhost <string>
-
IP addresses of monitors (for external clusters).
监视器的 IP 地址(用于外部集群)。 - --mountpoint <string>
-
mount point 挂载点
- --namespace <string>
-
Namespace. 命名空间。
-
--nocow <boolean> (default = 0)
--nocow <boolean>(默认 = 0) -
Set the NOCOW flag on files. Disables data checksumming and makes data errors unrecoverable, while allowing direct I/O. Only use this if data does not need to be any safer than on a single ext4-formatted disk with no underlying RAID system.
在文件上设置 NOCOW 标志。禁用数据校验和,并导致数据错误无法恢复,同时允许直接 I/O。仅当数据不需要比单个使用 ext4 格式且无底层 raid 系统的磁盘更安全时使用此选项。 - --nodes <string>
-
List of nodes for which the storage configuration applies.
适用于存储配置的节点列表。 - --nowritecache <boolean>
-
disable write caching on the target
禁用目标上的写缓存 - --options <string>
-
NFS/CIFS mount options (see man nfs or man mount.cifs)
NFS/CIFS 挂载选项(参见 man nfs 或 man mount.cifs) - --password <password>
-
Password for accessing the share/datastore.
访问共享/数据存储的密码。 - --pool <string>
-
Pool. 池。
- --port <integer> (1 - 65535)
-
Use this port to connect to the storage instead of the default one (for example, with PBS or ESXi). For NFS and CIFS, use the options option to configure the port via the mount options.
使用此端口连接存储,而不是默认端口(例如,使用 PBS 或 ESXi 时)。对于 NFS 和 CIFS,通过挂载选项中的 options 选项配置端口。 -
--preallocation <falloc | full | metadata | off> (default = metadata)
--preallocation <falloc | full | metadata | off>(默认 = metadata) -
Preallocation mode for raw and qcow2 images. Using metadata on raw images results in preallocation=off.
用于 raw 和 qcow2 镜像的预分配模式。对 raw 镜像使用 metadata 时,预分配等同于 off。 - --prune-backups [keep-all=<1|0>] [,keep-daily=<N>] [,keep-hourly=<N>] [,keep-last=<N>] [,keep-monthly=<N>] [,keep-weekly=<N>] [,keep-yearly=<N>]
-
The retention options with shorter intervals are processed first with --keep-last being the very first one. Each option covers a specific period of time. We say that backups within this period are covered by this option. The next option does not take care of already covered backups and only considers older backups.
保留选项中间隔较短的会先被处理,其中 --keep-last 是第一个处理的选项。每个选项覆盖特定的时间段。我们说该时间段内的备份被该选项覆盖。下一个选项不会处理已被覆盖的备份,只会考虑更早的备份。 - --saferemove <boolean>
-
Zero-out data when removing LVs.
移除逻辑卷时将数据清零。 - --saferemove_throughput <string>
-
Wipe throughput (cstream -t parameter value).
擦除吞吐量(cstream -t 参数值)。 - --server <string>
-
Server IP or DNS name.
服务器 IP 或 DNS 名称。 - --server2 <string>
-
Backup volfile server IP or DNS name.
备份 volfile 服务器的 IP 或 DNS 名称。Requires option(s): server
需要选项:server - --shared <boolean>
-
Indicate that this is a single storage with the same contents on all nodes (or all listed in the nodes option). It will not make the contents of a local storage automatically accessible to other nodes, it just marks an already shared storage as such!
表示这是一个在所有节点(或所有在 nodes 选项中列出的节点)上内容相同的单一存储。它不会自动使本地存储的内容对其他节点可访问,只是将已共享的存储标记为共享! -
--skip-cert-verification <boolean> (default = false)
--skip-cert-verification <boolean>(默认 = false) -
Disable TLS certificate verification, only enable on fully trusted networks!
禁用 TLS 证书验证,仅在完全信任的网络上启用! -
--smbversion <2.0 | 2.1 | 3 | 3.0 | 3.11 | default> (default = default)
--smbversion <2.0 | 2.1 | 3 | 3.0 | 3.11 | default>(默认 = default) -
SMB protocol version. default if not set, negotiates the highest SMB2+ version supported by both the client and server.
SMB 协议版本。未设置时为默认,协商客户端和服务器都支持的最高 SMB2+ 版本。 - --sparse <boolean>
-
use sparse volumes 使用稀疏卷
- --subdir <string>
-
Subdir to mount. 要挂载的子目录。
- --tagged_only <boolean>
-
Only use logical volumes tagged with pve-vm-ID.
仅使用带有 pve-vm-ID 标签的逻辑卷。 - --transport <rdma | tcp | unix>
-
Gluster transport: tcp or rdma
Gluster 传输方式:tcp 或 rdma - --username <string>
-
RBD Id. RBD 标识。
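For example, assuming a directory storage with the ID local, the allowed content types could be restricted like this (a sketch, adapt to your setup):
pvesm set local --content iso,vztmpl,backup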
pvesm status [OPTIONS] pvesm status [选项]
Get status for all datastores.
获取所有数据存储的状态。
- --content <string>
-
Only list stores which support this content type.
仅列出支持此内容类型的存储。 -
--enabled <boolean> (default = 0)
--enabled <boolean>(默认值 = 0) -
Only list stores which are enabled (not disabled in config).
仅列出已启用的存储(配置中未禁用)。 -
--format <boolean> (default = 0)
--format <boolean>(默认 = 0) -
Include information about formats
包含有关格式的信息 - --storage <storage ID> --storage <存储 ID>
-
Only list status for specified storage
仅列出指定存储的状态 - --target <string>
-
If target is different from node, we only list shared storages whose content is accessible on both this node and the specified target node.
如果目标与节点不同,我们只列出内容在该节点和指定目标节点上都可访问的共享存储。
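For instance, to show only enabled storages that can hold VM images (an illustrative invocation):
pvesm status --content images --enabled 1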
pvesm zfsscan
An alias for pvesm scan zfs.
pvesm scan zfs 的别名。
22.4. pvesubscription - Proxmox VE Subscription Manager
22.4. pvesubscription - Proxmox VE 订阅管理器
pvesubscription <COMMAND> [ARGS] [OPTIONS]
pvesubscription <命令> [参数] [选项]
pvesubscription delete
Delete subscription key of this node.
删除此节点的订阅密钥。
pvesubscription get
Read subscription info. 读取订阅信息。
pvesubscription help [OPTIONS]
Get help about specified command.
获取指定命令的帮助信息。
- --extra-args <array> --extra-args <数组>
-
Shows help for a specific command
显示特定命令的帮助信息 - --verbose <boolean> --verbose <布尔值>
-
Verbose output format. 详细输出格式。
pvesubscription set <key>
Set subscription key. 设置订阅密钥。
- <key>: \s*pve([1248])([cbsp])-[0-9a-f]{10}\s*
-
Proxmox VE subscription key
Proxmox VE 订阅密钥
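As a sketch, setting and then re-reading a key could look like the following; the key shown is a dummy placeholder that merely matches the expected format, not a real subscription key:
pvesubscription set pve2c-1234567890
pvesubscription get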
pvesubscription set-offline-key <data>
Internal use only! To set an offline key, use the package
proxmox-offline-mirror-helper instead.
仅限内部使用!要设置离线密钥,请使用包 proxmox-offline-mirror-helper。
- <data>: <string> <data>: <字符串>
-
A signed subscription info blob
一个已签名的订阅信息数据块
pvesubscription update [OPTIONS]
pvesubscription update [选项]
Update subscription info.
更新订阅信息。
-
--force <boolean> (default = 0)
--force <布尔值>(默认 = 0) -
Always connect to server, even if local cache is still valid.
始终连接到服务器,即使本地缓存仍然有效。
22.5. pveperf - Proxmox VE Benchmark Script
22.5. pveperf - Proxmox VE 基准测试脚本
pveperf [PATH] pveperf [路径]
22.6. pveceph - Manage CEPH Services on Proxmox VE Nodes
22.6. pveceph - 管理 Proxmox VE 节点上的 CEPH 服务
pveceph <COMMAND> [ARGS] [OPTIONS]
pveceph <命令> [参数] [选项]
pveceph createmgr
An alias for pveceph mgr create.
pveceph mgr create 的别名。
pveceph createmon
An alias for pveceph mon create.
pveceph mon create 的别名。
pveceph createosd
An alias for pveceph osd create.
pveceph osd create 的别名。
pveceph createpool
An alias for pveceph pool create.
pveceph pool create 的别名。
pveceph destroymgr
An alias for pveceph mgr destroy.
pveceph mgr destroy 的别名。
pveceph destroymon
An alias for pveceph mon destroy.
pveceph mon destroy 的别名。
pveceph destroyosd
An alias for pveceph osd destroy.
pveceph osd destroy 的别名。
pveceph destroypool
An alias for pveceph pool destroy.
pveceph pool destroy 的别名。
pveceph fs create [OPTIONS]
pveceph fs create [选项]
Create a Ceph filesystem 创建一个 Ceph 文件系统
-
--add-storage <boolean> (default = 0)
--add-storage <布尔值>(默认 = 0) -
Configure the created CephFS as storage for this cluster.
将创建的 CephFS 配置为此集群的存储。 -
--name (?^:^[^:/\s]+$) (default = cephfs)
--name (?^:^[^:/\s]+$)(默认 = cephfs) -
The ceph filesystem name.
Ceph 文件系统名称。 -
--pg_num <integer> (8 - 32768) (default = 128)
--pg_num <整数>(8 - 32768)(默认 = 128) -
Number of placement groups for the backing data pool. The metadata pool will use a quarter of this.
后端数据池的放置组数量。元数据池将使用其中的四分之一。
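An illustrative invocation, creating a CephFS with a smaller data pool and registering it as storage (the values are examples, not recommendations):
pveceph fs create --name cephfs --pg_num 64 --add-storage 1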
pveceph fs destroy <name> [OPTIONS]
pveceph fs destroy <name> [选项]
Destroy a Ceph filesystem
销毁一个 Ceph 文件系统
- <name>: <string> <name>: <字符串>
-
The ceph filesystem name.
Ceph 文件系统的名称。 -
--remove-pools <boolean> (default = 0)
--remove-pools <boolean>(默认 = 0) -
Remove data and metadata pools configured for this fs.
移除此文件系统配置的数据和元数据池。 -
--remove-storages <boolean> (default = 0)
--remove-storages <boolean>(默认 = 0) -
Remove all pveceph-managed storages configured for this fs.
移除此文件系统配置的所有由 pveceph 管理的存储。
pveceph help [OPTIONS] pveceph help [选项]
Get help about specified command.
获取指定命令的帮助。
- --extra-args <array> --extra-args <数组>
-
Shows help for a specific command
显示特定命令的帮助信息 - --verbose <boolean>
-
Verbose output format. 详细输出格式。
pveceph init [OPTIONS] pveceph init [选项]
Create initial ceph default configuration and setup symlinks.
创建初始的 ceph 默认配置并设置符号链接。
- --cluster-network <string>
-
Declare a separate cluster network; OSDs will route heartbeat, object replication and recovery traffic over it
声明一个独立的集群网络,OSD 将通过该网络路由心跳、对象复制和恢复流量Requires option(s): network
需要选项:network -
--disable_cephx <boolean> (default = 0)
--disable_cephx <boolean>(默认 = 0) -
Disable cephx authentication.
禁用 cephx 认证。cephx is a security feature protecting against man-in-the-middle attacks. Only consider disabling cephx if your network is private!
cephx 是一种防止中间人攻击的安全功能。只有在您的网络是私有的情况下才考虑禁用 cephx! -
--min_size <integer> (1 - 7) (default = 2)
--min_size <整数> (1 - 7) (默认 = 2) -
Minimum number of available replicas per object to allow I/O
允许 I/O 的每个对象的最小可用副本数 - --network <string>
-
Use specific network for all ceph related traffic
为所有 Ceph 相关流量使用指定网络 -
--pg_bits <integer> (6 - 14) (default = 6)
--pg_bits <integer> (6 - 14) (默认 = 6) -
Placement group bits, used to specify the default number of placement groups.
放置组位数,用于指定默认的放置组数量。Deprecated. This setting was deprecated in recent Ceph versions.
已弃用。此设置在最近的 Ceph 版本中已被弃用。 -
--size <integer> (1 - 7) (default = 3)
--size <整数> (1 - 7)(默认 = 3) -
Targeted number of replicas per object
每个对象的目标副本数量
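A minimal sketch, assuming dedicated Ceph networks already exist; the subnets are placeholders:
pveceph init --network 10.10.10.0/24 --cluster-network 10.10.20.0/24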
pveceph install [OPTIONS]
pveceph install [选项]
Install ceph related packages.
安装 ceph 相关包。
-
--allow-experimental <boolean> (default = 0)
--allow-experimental <boolean>(默认 = 0) -
Allow experimental versions. Use with care!
允许使用实验版本。请谨慎使用! -
--repository <enterprise | no-subscription | test> (default = enterprise)
--repository <enterprise | no-subscription | test>(默认 = enterprise) -
Ceph repository to use.
要使用的 Ceph 代码仓库。 -
--version <quincy | reef | squid> (default = quincy)
--version <quincy | reef | squid>(默认 = quincy) -
Ceph version to install.
要安装的 Ceph 版本。
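For example, to install Ceph from the no-subscription repository with an explicitly chosen release (pick the version appropriate for your Proxmox VE release):
pveceph install --repository no-subscription --version reef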
pveceph lspools
An alias for pveceph pool ls.
pveceph pool ls 的别名。
pveceph mds create [OPTIONS]
pveceph mds create [选项]
Create Ceph Metadata Server (MDS)
创建 Ceph 元数据服务器(MDS)
-
--hotstandby <boolean> (default = 0)
--hotstandby <布尔值>(默认 = 0) -
Determines whether a ceph-mds daemon should poll and replay the log of an active MDS. Faster switch on MDS failure, but needs more idle resources.
确定 ceph-mds 守护进程是否应轮询并重放活动 MDS 的日志。MDS 失败时切换更快,但需要更多空闲资源。 -
--name [a-zA-Z0-9]([a-zA-Z0-9\-]*[a-zA-Z0-9])? (default = nodename)
--name [a-zA-Z0-9]([a-zA-Z0-9\-]*[a-zA-Z0-9])?(默认 = 节点名) -
The ID for the mds, when omitted the same as the nodename
mds 的 ID,省略时与节点名相同
pveceph mds destroy <name>
Destroy Ceph Metadata Server
销毁 Ceph 元数据服务器
- <name>: [a-zA-Z0-9]([a-zA-Z0-9\-]*[a-zA-Z0-9])?
-
The name (ID) of the mds
mds 的名称(ID)
pveceph mgr create [OPTIONS]
pveceph mgr create [选项]
Create Ceph Manager 创建 Ceph 管理器
- --id [a-zA-Z0-9]([a-zA-Z0-9\-]*[a-zA-Z0-9])?
-
The ID for the manager, when omitted the same as the nodename
管理器的 ID,若省略则与节点名相同
pveceph mgr destroy <id>
Destroy Ceph Manager. 销毁 Ceph 管理器。
- <id>: [a-zA-Z0-9]([a-zA-Z0-9\-]*[a-zA-Z0-9])?
-
The ID of the manager
管理器的 ID
pveceph mon create [OPTIONS]
pveceph mon create [选项]
Create Ceph Monitor and Manager
创建 Ceph 监视器和管理器
- --mon-address <string>
-
Overwrites autodetected monitor IP address(es). Must be in the public network(s) of Ceph.
覆盖自动检测的监视器 IP 地址。必须位于 Ceph 的公共网络中。 - --monid [a-zA-Z0-9]([a-zA-Z0-9\-]*[a-zA-Z0-9])?
-
The ID for the monitor, when omitted the same as the nodename
监视器的 ID,省略时与节点名相同
pveceph mon destroy <monid>
Destroy Ceph Monitor and Manager.
销毁 Ceph 监视器和管理器。
-
<monid>: [a-zA-Z0-9]([a-zA-Z0-9\-]*[a-zA-Z0-9])?
<monid>:[a-zA-Z0-9]([a-zA-Z0-9\-]*[a-zA-Z0-9])? -
Monitor ID 监视器 ID
pveceph osd create <dev> [OPTIONS]
pveceph osd create <dev> [选项]
Create OSD 创建 OSD
- <dev>: <string> <dev>: <字符串>
-
Block device name. 块设备名称。
- --crush-device-class <string>
-
Set the device class of the OSD in crush.
设置 crush 中 OSD 的设备类别。 - --db_dev <string>
-
Block device name for block.db.
block.db 的块设备名称。 -
--db_dev_size <number> (1 - N) (default = bluestore_block_db_size or 10% of OSD size)
--db_dev_size <数字>(1 - N)(默认 = bluestore_block_db_size 或 OSD 大小的 10%) -
Size in GiB for block.db.
block.db 的大小,单位为 GiB。Requires option(s): db_dev
需要选项:db_dev -
--encrypted <boolean> (default = 0)
--encrypted <boolean>(默认值 = 0) -
Enables encryption of the OSD.
启用 OSD 的加密。 -
--osds-per-device <integer> (1 - N)
--osds-per-device <整数>(1 - N) -
OSD services per physical device. Only useful for fast NVMe devices, to utilize their performance better.
每个物理设备的 OSD 服务数量。仅适用于快速的 NVMe 设备,以更好地利用其性能。 - --wal_dev <string> --wal_dev <字符串>
-
Block device name for block.wal.
block.wal 的块设备名称。 -
--wal_dev_size <number> (0.5 - N) (default = bluestore_block_wal_size or 1% of OSD size)
--wal_dev_size <数字> (0.5 - N) (默认 = bluestore_block_wal_size 或 OSD 大小的 1%) -
Size in GiB for block.wal.
block.wal 的大小,单位为 GiB。Requires option(s): wal_dev
需要选项:wal_dev
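Two illustrative invocations with placeholder device names; the second places the block.db of the OSD on a faster NVMe device:
pveceph osd create /dev/sdb
pveceph osd create /dev/sdc --db_dev /dev/nvme0n1 --db_dev_size 50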
pveceph osd destroy <osdid> [OPTIONS]
pveceph osd destroy <osdid> [选项]
Destroy OSD 销毁 OSD
- <osdid>: <integer> <osdid>:<整数>
-
OSD ID
-
--cleanup <boolean> (default = 0)
--cleanup <boolean>(默认 = 0) -
If set, we remove partition table entries.
如果设置,将移除分区表条目。
pveceph osd details <osdid> [OPTIONS] [FORMAT_OPTIONS]
pveceph osd details <osdid> [选项] [格式选项]
Get OSD details. 获取 OSD 详情。
- <osdid>: <string>
-
ID of the OSD
OSD 的 ID -
--verbose <boolean> (default = 0)
--verbose <boolean>(默认 = 0) -
Print verbose information, same as json-pretty output format.
打印详细信息,与 json-pretty 输出格式相同。
pveceph pool create <name> [OPTIONS]
pveceph pool create <name> [选项]
Create Ceph pool 创建 Ceph 池
- <name>: (?^:^[^:/\s]+$)
-
The name of the pool. It must be unique.
池的名称。必须唯一。 -
--add_storages <boolean> (default = 0; for erasure coded pools: 1)
--add_storages <布尔值>(默认 = 0;对于纠删码池:1) -
Configure VM and CT storage using the new pool.
使用新池配置虚拟机和容器存储。 -
--application <cephfs | rbd | rgw> (default = rbd)
--application <cephfs | rbd | rgw>(默认 = rbd) -
The application of the pool.
池的应用。 - --crush_rule <string>
-
The rule to use for mapping object placement in the cluster.
用于在集群中映射对象放置的规则。 -
--erasure-coding k=<integer> ,m=<integer> [,device-class=<class>] [,failure-domain=<domain>] [,profile=<profile>]
--erasure-coding k=<整数> ,m=<整数> [,device-class=<类>] [,failure-domain=<域>] [,profile=<配置文件>] -
Create an erasure coded pool for RBD with an accompanying replicated pool for metadata storage. With EC, the common ceph options size, min_size and crush_rule parameters will be applied to the metadata pool.
为 RBD 创建一个纠删码池,并配套一个用于元数据存储的副本池。使用纠删码时,常见的 ceph 选项 size、min_size 和 crush_rule 参数将应用于元数据池。 -
--min_size <integer> (1 - 7) (default = 2)
--min_size <整数> (1 - 7)(默认 = 2) -
Minimum number of replicas per object
每个对象的最小副本数 -
--pg_autoscale_mode <off | on | warn> (default = warn)
--pg_autoscale_mode <关闭 | 开启 | 警告>(默认 = 警告) -
The automatic PG scaling mode of the pool.
池的自动 PG 缩放模式。 -
--pg_num <integer> (1 - 32768) (default = 128)
--pg_num <整数> (1 - 32768) (默认 = 128) -
Number of placement groups.
放置组的数量。 -
--pg_num_min <integer> (-N - 32768)
--pg_num_min <整数> (-N - 32768) -
Minimal number of placement groups.
最小的放置组数量。 -
--size <integer> (1 - 7) (default = 3)
--size <整数> (1 - 7)(默认 = 3) -
Number of replicas per object
每个对象的副本数量 - --target_size ^(\d+(\.\d+)?)([KMGT])?$
-
The estimated target size of the pool for the PG autoscaler.
PG 自动扩展器的池估计目标大小。 -
--target_size_ratio <number>
--target_size_ratio <数字> -
The estimated target ratio of the pool for the PG autoscaler.
PG 自动扩展器的池估计目标比例。
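As an example sketch, the following creates a replicated pool and directly configures it as VM/CT storage (pool name and sizes are illustrative):
pveceph pool create vmpool --size 3 --min_size 2 --pg_autoscale_mode on --add_storages 1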
pveceph pool destroy <name> [OPTIONS]
pveceph pool destroy <name> [选项]
Destroy pool 销毁资源池
- <name>: <string>
-
The name of the pool. It must be unique.
资源池的名称。必须唯一。 -
--force <boolean> (default = 0)
--force <boolean>(默认 = 0) -
If true, destroys pool even if in use
如果为真,即使正在使用也销毁池 -
--remove_ecprofile <boolean> (default = 1)
--remove_ecprofile <boolean>(默认 = 1) -
Remove the erasure code profile. Defaults to true, if applicable.
移除纠删码配置文件。默认值为真(如果适用)。 -
--remove_storages <boolean> (default = 0)
--remove_storages <boolean>(默认 = 0) -
Remove all pveceph-managed storages configured for this pool
删除为此池配置的所有 pveceph 管理的存储
pveceph pool get <name> [OPTIONS] [FORMAT_OPTIONS]
pveceph pool get <name> [选项] [格式选项]
Show the current pool status.
显示当前池状态。
- <name>: <string> <name>: <字符串>
-
The name of the pool. It must be unique.
池的名称。必须唯一。 -
--verbose <boolean> (default = 0)
--verbose <boolean>(默认 = 0) -
If enabled, additional data (e.g. statistics) will be displayed.
如果启用,将显示额外的数据(例如统计信息)。
pveceph pool ls [FORMAT_OPTIONS]
List all pools and their settings (which are settable by the POST/PUT
endpoints).
列出所有存储池及其设置(可通过 POST/PUT 端点进行设置)。
pveceph pool set <name> [OPTIONS]
pveceph pool set <name> [选项]
Change POOL settings 更改存储池设置
- <name>: (?^:^[^:/\s]+$) <name>:(?^:^[^:/\s]+$)
-
The name of the pool. It must be unique.
池的名称。必须唯一。 - --application <cephfs | rbd | rgw>
-
The application of the pool.
池的应用。 - --crush_rule <string>
-
The rule to use for mapping object placement in the cluster.
用于映射集群中对象放置的规则。 -
--min_size <integer> (1 - 7)
--min_size <整数> (1 - 7) -
Minimum number of replicas per object
每个对象的最小副本数 -
--pg_autoscale_mode <off | on | warn>
--pg_autoscale_mode <关闭 | 开启 | 警告> -
The automatic PG scaling mode of the pool.
池的自动 PG 缩放模式。 -
--pg_num <integer> (1 - 32768)
--pg_num <整数> (1 - 32768) -
Number of placement groups.
放置组的数量。 -
--pg_num_min <integer> (-N - 32768)
--pg_num_min <整数> (-N - 32768) -
Minimal number of placement groups.
最小的放置组数量。 -
--size <integer> (1 - 7)
--size <整数> (1 - 7) -
Number of replicas per object
每个对象的副本数量 - --target_size ^(\d+(\.\d+)?)([KMGT])?$
-
The estimated target size of the pool for the PG autoscaler.
PG 自动扩展器的池估计目标大小。 - --target_size_ratio <number>
-
The estimated target ratio of the pool for the PG autoscaler.
PG 自动扩展器的池估计目标比例。
pveceph purge [OPTIONS] pveceph purge [选项]
Destroy ceph related data and configuration files.
销毁与 Ceph 相关的数据和配置文件。
- --crash <boolean> --crash <布尔值>
-
Additionally purge Ceph crash logs, /var/lib/ceph/crash.
另外清除 Ceph 崩溃日志,位于 /var/lib/ceph/crash。 - --logs <boolean> --logs <布尔值>
-
Additionally purge Ceph logs, /var/log/ceph.
另外清除 Ceph 日志,/var/log/ceph。
pveceph start [OPTIONS] pveceph start [选项]
Start ceph services. 启动 ceph 服务。
-
--service (ceph|mon|mds|osd|mgr)(\.[a-zA-Z0-9]([a-zA-Z0-9\-]*[a-zA-Z0-9])?)? (default = ceph.target)
--service (ceph|mon|mds|osd|mgr)(\.[a-zA-Z0-9]([a-zA-Z0-9\-]*[a-zA-Z0-9])?)?(默认 = ceph.target) -
Ceph service name. Ceph 服务名称。
pveceph status
Get Ceph Status. 获取 Ceph 状态。
pveceph stop [OPTIONS] pveceph stop [选项]
Stop ceph services. 停止 ceph 服务。
-
--service (ceph|mon|mds|osd|mgr)(\.[a-zA-Z0-9]([a-zA-Z0-9\-]*[a-zA-Z0-9])?)? (default = ceph.target)
--service (ceph|mon|mds|osd|mgr)(\.[a-zA-Z0-9]([a-zA-Z0-9\-]*[a-zA-Z0-9])?)?(默认 = ceph.target) -
Ceph service name. Ceph 服务名称。
22.7. pvenode - Proxmox VE Node Management
22.7. pvenode - Proxmox VE 节点管理
pvenode <COMMAND> [ARGS] [OPTIONS]
pvenode <命令> [参数] [选项]
pvenode acme account deactivate [<name>]
pvenode acme account deactivate [<名称>]
Deactivate existing ACME account at CA.
停用 CA 上现有的 ACME 账户。
-
<name>: <name> (default = default)
<名称>:<名称>(默认 = default) -
ACME account config file name.
ACME 账户配置文件名。
pvenode acme account info [<name>] [FORMAT_OPTIONS]
pvenode acme account info [<name>] [格式选项]
Return existing ACME account information.
返回现有的 ACME 账户信息。
-
<name>: <name> (default = default)
<name>:<name>(默认 = default) -
ACME account config file name.
ACME 账户配置文件名。
pvenode acme account list
pvenode acme account list
ACMEAccount index. ACME 账户索引。
pvenode acme account register [<name>] {<contact>} [OPTIONS]
pvenode acme account register [<name>] {<contact>} [选项]
Register a new ACME account with a compatible CA.
使用兼容的 CA 注册一个新的 ACME 账户。
-
<name>: <name> (default = default)
<name>:<name>(默认 = default) -
ACME account config file name.
ACME 账户配置文件名。 - <contact>: <string> <contact>:<string>
-
Contact email addresses.
联系电子邮件地址。 - --directory ^https?://.*
-
URL of ACME CA directory endpoint.
ACME CA 目录端点的 URL。
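An illustrative sequence, using a placeholder contact address, is to register an account and then order a certificate for the node (configuring the acmedomain entries beforehand via pvenode config set is assumed):
pvenode acme account register default admin@example.com
pvenode acme cert order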
pvenode acme account update [<name>] [OPTIONS]
Update existing ACME account information with CA. Note: not specifying any
new account information triggers a refresh.
使用 CA 更新现有的 ACME 账户信息。注意:如果未指定任何新的账户信息,将触发刷新。
-
<name>: <name> (default = default)
<name>: <name>(默认 = default) -
ACME account config file name.
ACME 账户配置文件名。 - --contact <string>
-
Contact email addresses.
联系电子邮件地址。
pvenode acme cert order [OPTIONS]
pvenode acme cert order [选项]
Order a new certificate from ACME-compatible CA.
从兼容 ACME 的证书颁发机构订购新证书。
-
--force <boolean> (default = 0)
--force <布尔值>(默认 = 0) -
Overwrite existing custom certificate.
覆盖现有的自定义证书。
pvenode acme cert renew [OPTIONS]
pvenode acme cert renew [选项]
Renew existing certificate from CA.
从证书颁发机构续订现有证书。
-
--force <boolean> (default = 0)
--force <布尔值>(默认 = 0) -
Force renewal even if expiry is more than 30 days away.
即使证书有效期超过 30 天,也强制续订。
pvenode acme cert revoke
Revoke existing certificate from CA.
从证书颁发机构撤销现有证书。
pvenode acme plugin add <type> <id> [OPTIONS]
pvenode acme plugin add <type> <id> [选项]
Add ACME plugin configuration.
添加 ACME 插件配置。
- <type>: <dns | standalone>
-
ACME challenge type. ACME 挑战类型。
- <id>: <string>
-
ACME Plugin ID name
ACME 插件 ID 名称 - --api <1984hosting | acmedns | acmeproxy | active24 | ad | ali | alviy | anx | artfiles | arvan | aurora | autodns | aws | azion | azure | bookmyname | bunny | cf | clouddns | cloudns | cn | conoha | constellix | cpanel | curanet | cyon | da | ddnss | desec | df | dgon | dnsexit | dnshome | dnsimple | dnsservices | doapi | domeneshop | dp | dpi | dreamhost | duckdns | durabledns | dyn | dynu | dynv6 | easydns | edgedns | euserv | exoscale | fornex | freedns | gandi_livedns | gcloud | gcore | gd | geoscaling | googledomains | he | hetzner | hexonet | hostingde | huaweicloud | infoblox | infomaniak | internetbs | inwx | ionos | ionos_cloud | ipv64 | ispconfig | jd | joker | kappernet | kas | kinghost | knot | la | leaseweb | lexicon | limacity | linode | linode_v4 | loopia | lua | maradns | me | miab | misaka | myapi | mydevil | mydnsjp | mythic_beasts | namecheap | namecom | namesilo | nanelo | nederhost | neodigit | netcup | netlify | nic | njalla | nm | nsd | nsone | nsupdate | nw | oci | omglol | one | online | openprovider | openstack | opnsense | ovh | pdns | pleskxml | pointhq | porkbun | rackcorp | rackspace | rage4 | rcode0 | regru | scaleway | schlundtech | selectel | selfhost | servercow | simply | technitium | tele3 | tencent | timeweb | transip | udr | ultra | unoeuro | variomedia | veesp | vercel | vscale | vultr | websupport | west_cn | world4you | yandex360 | yc | zilore | zone | zoneedit | zonomi>
-
API plugin name API 插件名称
-
--data File with one key-value pair per line, will be base64url encode for storage in plugin config.
--data 每行包含一个键值对的文件,将以 base64url 编码存储在插件配置中。 -
DNS plugin data. (base64 encoded)
DNS 插件数据。(base64 编码) - --disable <boolean>
-
Flag to disable the config.
用于禁用配置的标志。 - --nodes <string>
-
List of cluster node names.
集群节点名称列表。 -
--validation-delay <integer> (0 - 172800) (default = 30)
--validation-delay <整数> (0 - 172800) (默认 = 30) -
Extra delay in seconds to wait before requesting validation. Allows coping with a long TTL of DNS records.
在请求验证前额外等待的秒数。用于应对 DNS 记录的长 TTL。
pvenode acme plugin config <id> [FORMAT_OPTIONS]
pvenode acme plugin config <id> [格式选项]
Get ACME plugin configuration.
获取 ACME 插件配置。
- <id>: <string>
-
Unique identifier for ACME plugin instance.
ACME 插件实例的唯一标识符。
pvenode acme plugin list [OPTIONS] [FORMAT_OPTIONS]
pvenode acme plugin list [选项] [格式选项]
ACME plugin index. ACME 插件索引。
- --type <dns | standalone>
-
Only list ACME plugins of a specific type
仅列出特定类型的 ACME 插件
pvenode acme plugin remove <id>
Delete ACME plugin configuration.
删除 ACME 插件配置。
- <id>: <string>
-
Unique identifier for ACME plugin instance.
ACME 插件实例的唯一标识符。
pvenode acme plugin set <id> [OPTIONS]
Update ACME plugin configuration.
更新 ACME 插件配置。
- <id>: <string>
-
ACME Plugin ID name
ACME 插件 ID 名称 - --api <1984hosting | acmedns | acmeproxy | active24 | ad | ali | alviy | anx | artfiles | arvan | aurora | autodns | aws | azion | azure | bookmyname | bunny | cf | clouddns | cloudns | cn | conoha | constellix | cpanel | curanet | cyon | da | ddnss | desec | df | dgon | dnsexit | dnshome | dnsimple | dnsservices | doapi | domeneshop | dp | dpi | dreamhost | duckdns | durabledns | dyn | dynu | dynv6 | easydns | edgedns | euserv | exoscale | fornex | freedns | gandi_livedns | gcloud | gcore | gd | geoscaling | googledomains | he | hetzner | hexonet | hostingde | huaweicloud | infoblox | infomaniak | internetbs | inwx | ionos | ionos_cloud | ipv64 | ispconfig | jd | joker | kappernet | kas | kinghost | knot | la | leaseweb | lexicon | limacity | linode | linode_v4 | loopia | lua | maradns | me | miab | misaka | myapi | mydevil | mydnsjp | mythic_beasts | namecheap | namecom | namesilo | nanelo | nederhost | neodigit | netcup | netlify | nic | njalla | nm | nsd | nsone | nsupdate | nw | oci | omglol | one | online | openprovider | openstack | opnsense | ovh | pdns | pleskxml | pointhq | porkbun | rackcorp | rackspace | rage4 | rcode0 | regru | scaleway | schlundtech | selectel | selfhost | servercow | simply | technitium | tele3 | tencent | timeweb | transip | udr | ultra | unoeuro | variomedia | veesp | vercel | vscale | vultr | websupport | west_cn | world4you | yandex360 | yc | zilore | zone | zoneedit | zonomi>
-
API plugin name API 插件名称
-
--data File with one key-value pair per line, will be base64url encode for storage in plugin config.
--data 每行包含一个键值对的文件,将以 base64url 编码存储在插件配置中。 -
DNS plugin data. (base64 encoded)
DNS 插件数据。(base64 编码) - --delete <string>
-
A list of settings you want to delete.
您想要删除的设置列表。 - --digest <string>
-
Prevent changes if current configuration file has a different digest. This can be used to prevent concurrent modifications.
如果当前配置文件的摘要不同,则阻止更改。此功能可用于防止并发修改。 - --disable <boolean>
-
Flag to disable the config.
用于禁用配置的标志。 - --nodes <string>
-
List of cluster node names.
集群节点名称列表。 -
--validation-delay <integer> (0 - 172800) (default = 30)
--validation-delay <整数> (0 - 172800) (默认 = 30) -
Extra delay in seconds to wait before requesting validation. Allows coping with a long TTL of DNS records.
在请求验证前额外等待的秒数。用于应对 DNS 记录的长 TTL。
pvenode cert delete [<restart>]
DELETE custom certificate chain and key.
删除自定义证书链和密钥。
-
<restart>: <boolean> (default = 0)
<restart>: <布尔值>(默认 = 0) -
Restart pveproxy. 重启 pveproxy。
pvenode cert info [FORMAT_OPTIONS]
pvenode cert info [格式选项]
Get information about node’s certificates.
获取节点证书的信息。
pvenode cert set <certificates> [<key>] [OPTIONS] [FORMAT_OPTIONS]
pvenode cert set <certificates> [<key>] [选项] [格式选项]
Upload or update custom certificate chain and key.
上传或更新自定义证书链和密钥。
-
<certificates>: <string>
<certificates>: <字符串> -
PEM encoded certificate (chain).
PEM 编码的证书(链)。 - <key>: <string>
-
PEM encoded private key.
PEM 编码的私钥。 -
--force <boolean> (default = 0)
--force <boolean>(默认 = 0) -
Overwrite existing custom or ACME certificate files.
覆盖现有的自定义或 ACME 证书文件。 -
--restart <boolean> (default = 0)
--restart <boolean>(默认 = 0) -
Restart pveproxy. 重启 pveproxy。
pvenode config get [OPTIONS]
pvenode config get [选项]
Get node configuration options.
获取节点配置选项。
-
--property <acme | acmedomain0 | acmedomain1 | acmedomain2 | acmedomain3 | acmedomain4 | acmedomain5 | ballooning-target | description | startall-onboot-delay | wakeonlan> (default = all)
--property <acme | acmedomain0 | acmedomain1 | acmedomain2 | acmedomain3 | acmedomain4 | acmedomain5 | ballooning-target | description | startall-onboot-delay | wakeonlan>(默认 = all) -
Return only a specific property from the node configuration.
仅返回节点配置中的特定属性。
pvenode config set [OPTIONS]
pvenode config set [选项]
Set node configuration options.
设置节点配置选项。
- --acme [account=<name>] [,domains=<domain[;domain;...]>]
-
Node specific ACME settings.
节点特定的 ACME 设置。 - --acmedomain[n] [domain=]<domain> [,alias=<domain>] [,plugin=<name of the plugin configuration>]
-
ACME domain and validation plugin
ACME 域名和验证插件 -
--ballooning-target <integer> (0 - 100) (default = 80)
--ballooning-target <整数> (0 - 100) (默认 = 80) -
RAM usage target for ballooning (in percent of total memory)
气球内存目标使用率(占总内存的百分比) - --delete <string> --delete <字符串>
-
A list of settings you want to delete.
您想要删除的设置列表。 - --description <string>
-
Description for the Node. Shown in the web-interface node notes panel. This is saved as comment inside the configuration file.
节点的描述。在网页界面的节点备注面板中显示。此信息作为注释保存在配置文件中。 - --digest <string>
-
Prevent changes if current configuration file has different SHA1 digest. This can be used to prevent concurrent modifications.
如果当前配置文件的 SHA1 摘要不同,则阻止更改。此功能可用于防止并发修改。 -
--startall-onboot-delay <integer> (0 - 300) (default = 0)
--startall-onboot-delay <整数> (0 - 300) (默认 = 0) -
Initial delay in seconds, before starting all the Virtual Guests with on-boot enabled.
启动所有启用了开机启动的虚拟客户机之前的初始延迟时间,单位为秒。 -
--wakeonlan [mac=]<MAC address> [,bind-interface=<bind interface>] [,broadcast-address=<IPv4 broadcast address>]
--wakeonlan [mac=]<MAC 地址> [,bind-interface=<绑定接口>] [,broadcast-address=<IPv4 广播地址>] -
Node specific wake on LAN settings.
节点特定的唤醒局域网设置。
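For example, to delay autostart of guests after boot and lower the ballooning target (the values are illustrative):
pvenode config set --startall-onboot-delay 30 --ballooning-target 70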
pvenode help [OPTIONS] pvenode help [选项]
Get help about specified command.
获取指定命令的帮助。
- --extra-args <array> --extra-args <数组>
-
Shows help for a specific command
显示特定命令的帮助。 - --verbose <boolean>
-
Verbose output format. 详细输出格式。
pvenode migrateall <target> [OPTIONS]
Migrate all VMs and Containers.
迁移所有虚拟机和容器。
- <target>: <string>
-
Target node. 目标节点。
- --maxworkers <integer> (1 - N)
-
Maximal number of parallel migration jobs. If not set, uses max_workers from datacenter.cfg. One of the two must be set!
最大并行迁移任务数。如果未设置,则使用 datacenter.cfg 中的 'max_workers'。两者中必须设置一个! - --vms <string>
-
Only consider Guests with these IDs.
仅考虑具有这些 ID 的虚拟机。 - --with-local-disks <boolean>
-
Enable live storage migration for local disk
启用本地磁盘的在线存储迁移。
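A sketch of evacuating a node before maintenance; the target node name is a placeholder:
pvenode migrateall pve-node2 --maxworkers 2 --with-local-disks 1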
pvenode startall [OPTIONS]
pvenode startall [选项]
Start all VMs and containers located on this node (by default only those
with onboot=1).
启动位于此节点上的所有虚拟机和容器(默认仅启动 onboot=1 的虚拟机和容器)。
-
--force <boolean> (default = off)
--force <布尔值>(默认 = 关闭) -
Issue the start command even if the virtual guest has onboot not set or set to off.
即使虚拟机的 onboot 未设置或设置为关闭,也强制发出启动命令。 - --vms <string> --vms <字符串>
-
Only consider guests from this comma separated list of VMIDs.
仅考虑此以逗号分隔的 VMID 列表中的客户机。
pvenode stopall [OPTIONS]
pvenode stopall [选项]
Stop all VMs and Containers.
停止所有虚拟机和容器。
-
--force-stop <boolean> (default = 1)
--force-stop <boolean>(默认值 = 1) -
Force a hard-stop after the timeout.
超时后强制硬停止。 -
--timeout <integer> (0 - 7200) (default = 180)
--timeout <integer>(0 - 7200)(默认值 = 180) -
Timeout for each guest shutdown task. Depending on force-stop, the shutdown gets then simply aborted or a hard-stop is forced.
每个虚拟机关闭任务的超时时间。根据 force-stop 的设置,关闭操作要么被简单中止,要么强制硬停止。 - --vms <string> --vms <字符串>
-
Only consider Guests with these IDs.
仅考虑具有这些 ID 的客户机。
pvenode task list [OPTIONS] [FORMAT_OPTIONS]
pvenode task list [选项] [格式选项]
Read task list for one node (finished tasks).
读取一个节点的任务列表(已完成的任务)。
-
--errors <boolean> (default = 0)
--errors <boolean>(默认值 = 0) -
Only list tasks with a status of ERROR.
仅列出状态为 ERROR 的任务。 -
--limit <integer> (0 - N) (default = 50)
--limit <integer>(0 - N)(默认值 = 50) -
Only list this amount of tasks.
仅列出此数量的任务。 - --since <integer> --since <整数>
-
Only list tasks since this UNIX epoch.
仅列出自该 UNIX 纪元以来的任务。 -
--source <active | all | archive> (default = archive)
--source <active | all | archive>(默认 = archive) -
List archived, active or all tasks.
列出已归档、活动或所有任务。 -
--start <integer> (0 - N) (default = 0)
--start <整数> (0 - N) (默认 = 0) -
List tasks beginning from this offset.
从此偏移量开始列出任务。 - --statusfilter <string> --statusfilter <字符串>
-
List of Task States that should be returned.
应返回的任务状态列表。 - --typefilter <string>
-
Only list tasks of this type (e.g., vzstart, vzdump).
仅列出此类型的任务(例如,vzstart,vzdump)。 - --until <integer>
-
Only list tasks until this UNIX epoch.
仅列出直到此 UNIX 时间戳的任务。 - --userfilter <string>
-
Only list tasks from this user.
仅列出该用户的任务。 - --vmid <integer> (100 - 999999999)
-
Only list tasks for this VM.
仅列出该虚拟机的任务。
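For example, to list the 20 most recent failed vzdump tasks from both the active and the archived task lists:
例如,列出活动和归档任务列表中最近 20 条失败的 vzdump 任务:
pvenode task list --source all --errors 1 --typefilter vzdump --limit 20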
pvenode task log <upid> [OPTIONS]
pvenode task log <upid> [选项]
Read task log. 读取任务日志。
- <upid>: <string> <upid>: <字符串>
-
The task’s unique ID.
任务的唯一 ID。 - --download <boolean>
-
Whether the tasklog file should be downloaded. This parameter can’t be used in conjunction with other parameters
是否应下载任务日志文件。此参数不能与其他参数同时使用 -
--start <integer> (0 - N) (default = 0)
--start <integer> (0 - N) (默认 = 0) -
Start at this line when reading the tasklog
读取任务日志时从此行开始
pvenode task status <upid> [FORMAT_OPTIONS]
pvenode task status <upid> [格式选项]
Read task status. 读取任务状态。
- <upid>: <string> <upid>:<字符串>
-
The task’s unique ID.
任务的唯一 ID。
pvenode wakeonlan <node> pvenode wakeonlan <节点>
Try to wake a node via wake on LAN network packet.
尝试通过局域网唤醒(Wake on LAN)网络数据包唤醒节点。
- <node>: <string> <节点>: <字符串>
-
target node for wake on LAN packet
用于局域网唤醒数据包的目标节点
22.8. pvesh - Shell interface for the Proxmox VE API
22.8. pvesh - Proxmox VE API 的 Shell 接口
pvesh <COMMAND> [ARGS] [OPTIONS]
pvesh <命令> [参数] [选项]
pvesh create <api_path> [OPTIONS] [FORMAT_OPTIONS]
pvesh create <api 路径> [选项] [格式选项]
Call API POST on <api_path>.
对 <api 路径> 调用 API POST。
- <api_path>: <string>
-
API path. API 路径。
- --noproxy <boolean>
-
Disable automatic proxying.
禁用自动代理。
pvesh delete <api_path> [OPTIONS] [FORMAT_OPTIONS]
pvesh delete <api_path> [选项] [格式选项]
Call API DELETE on <api_path>.
对 <api_path> 调用 API DELETE。
- <api_path>: <string> <api_path>: <字符串>
-
API path. API 路径。
- --noproxy <boolean>
-
Disable automatic proxying.
禁用自动代理。
pvesh get <api_path> [OPTIONS] [FORMAT_OPTIONS]
Call API GET on <api_path>.
对 <api_path> 调用 API GET。
- <api_path>: <string>
-
API path. API 路径。
- --noproxy <boolean>
-
Disable automatic proxying.
禁用自动代理。
pvesh help [OPTIONS] pvesh help [选项]
Get help about specified command.
获取指定命令的帮助。
- --extra-args <array> --extra-args <数组>
-
Shows help for a specific command
显示特定命令的帮助信息 - --verbose <boolean>
-
Verbose output format. 详细输出格式。
pvesh ls <api_path> [OPTIONS] [FORMAT_OPTIONS]
List child objects on <api_path>.
列出 <api_path> 上的子对象。
- <api_path>: <string>
-
API path. API 路径。
- --noproxy <boolean>
-
Disable automatic proxying.
禁用自动代理。
pvesh set <api_path> [OPTIONS] [FORMAT_OPTIONS]
pvesh set <api_path> [选项] [格式选项]
Call API PUT on <api_path>.
对 <api_path> 调用 API PUT。
- <api_path>: <string> <api_path>:<字符串>
-
API path. API 路径。
- --noproxy <boolean>
-
Disable automatic proxying.
禁用自动代理。
pvesh usage <api_path> [OPTIONS]
pvesh usage <api_path> [选项]
print API usage information for <api_path>.
打印 <api_path> 的 API 使用信息。
- <api_path>: <string>
-
API path. API 路径。
- --command <create | delete | get | set>
-
API command. API 命令。
- --returns <boolean>
-
Including schema for returned data.
包含返回数据的模式。 - --verbose <boolean>
-
Verbose output format. 详细输出格式。
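A few typical pvesh calls, assuming a node named pve1 and a VM with ID 100 (both placeholders for your own environment):
几个典型的 pvesh 调用示例,假设节点名为 pve1,虚拟机 ID 为 100(均为示例值,请替换为实际环境中的值):
pvesh get /version
pvesh ls /nodes/pve1/qemu
pvesh create /nodes/pve1/qemu/100/status/start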
22.9. qm - QEMU/KVM Virtual Machine Manager
22.9. qm - QEMU/KVM 虚拟机管理器
qm <COMMAND> [ARGS] [OPTIONS]
qm <命令> [参数] [选项]
qm agent
An alias for qm guest cmd.
qm guest cmd 的别名。
qm cleanup <vmid> <clean-shutdown> <guest-requested>
Cleans up resources like tap devices, vgpus, etc. Called after a vm shuts
down, crashes, etc.
清理诸如 tap 设备、vgpu 等资源。在虚拟机关闭、崩溃等情况后调用。
-
<vmid>: <integer> (100 - 999999999)
<vmid>:<整数>(100 - 999999999) -
The (unique) ID of the VM.
虚拟机的(唯一)ID。 - <clean-shutdown>: <boolean>
-
Indicates if qemu shutdown cleanly.
指示 qemu 是否已正常关闭。 - <guest-requested>: <boolean>
-
Indicates if the shutdown was requested by the guest or via qmp.
指示关闭是否由客户机或通过 qmp 请求。
qm clone <vmid> <newid> [OPTIONS]
qm clone <vmid> <newid> [选项]
Create a copy of virtual machine/template.
创建虚拟机/模板的副本。
-
<vmid>: <integer> (100 - 999999999)
<vmid>:<整数>(100 - 999999999) -
The (unique) ID of the VM.
虚拟机的(唯一)ID。 -
<newid>: <integer> (100 - 999999999)
<newid>: <整数> (100 - 999999999) -
VMID for the clone.
克隆的 VMID。 -
--bwlimit <integer> (0 - N) (default = clone limit from datacenter or storage config)
--bwlimit <整数> (0 - N) (默认 = 来自数据中心或存储配置的克隆限制) -
Override I/O bandwidth limit (in KiB/s).
覆盖 I/O 带宽限制(以 KiB/s 为单位)。 - --description <string>
-
Description for the new VM.
新虚拟机的描述。 - --format <qcow2 | raw | vmdk>
-
Target format for file storage. Only valid for full clone.
文件存储的目标格式。仅对完全克隆有效。 - --full <boolean>
-
Create a full copy of all disks. This is always done when you clone a normal VM. For VM templates, we try to create a linked clone by default.
创建所有磁盘的完整副本。克隆普通虚拟机时总是执行此操作。对于虚拟机模板,我们默认尝试创建一个链接克隆。 - --name <string>
-
Set a name for the new VM.
为新虚拟机设置名称。 - --pool <string>
-
Add the new VM to the specified pool.
将新的虚拟机添加到指定的资源池。 - --snapname <string>
-
The name of the snapshot.
快照的名称。 - --storage <storage ID> --storage <存储 ID>
-
Target storage for full clone.
完整克隆的目标存储。 - --target <string> --target <字符串>
-
Target node. Only allowed if the original VM is on shared storage.
目标节点。仅当原始虚拟机位于共享存储上时允许使用。
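For example, to create a full clone of VM 100 as VM 123 on the storage local-lvm (VMIDs and the storage name are placeholders):
例如,将虚拟机 100 完整克隆为虚拟机 123,并存放到存储 local-lvm(VMID 与存储名仅为示例):
qm clone 100 123 --name web-clone --full 1 --storage local-lvm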
qm cloudinit dump <vmid> <type>
Get automatically generated cloudinit config.
获取自动生成的 cloudinit 配置。
-
<vmid>: <integer> (100 - 999999999)
<vmid>:<整数>(100 - 999999999) -
The (unique) ID of the VM.
虚拟机的(唯一)ID。 - <type>: <meta | network | user>
-
Config type. 配置类型。
qm cloudinit pending <vmid>
Get the cloudinit configuration with both current and pending values.
获取包含当前值和待处理值的 cloudinit 配置。
-
<vmid>: <integer> (100 - 999999999)
<vmid>: <整数> (100 - 999999999) -
The (unique) ID of the VM.
虚拟机的(唯一)ID。
qm cloudinit update <vmid>
Regenerate and change cloudinit config drive.
重新生成并更改 cloudinit 配置驱动。
-
<vmid>: <integer> (100 - 999999999)
<vmid>:<整数>(100 - 999999999) -
The (unique) ID of the VM.
虚拟机的(唯一)ID。
qm config <vmid> [OPTIONS]
qm config <vmid> [选项]
Get the virtual machine configuration with pending configuration changes
applied. Set the current parameter to get the current configuration
instead.
获取应用了待处理配置更改的虚拟机配置。设置 current 参数可改为获取当前配置。
-
<vmid>: <integer> (100 - 999999999)
<vmid>:<整数>(100 - 999999999) -
The (unique) ID of the VM.
虚拟机的(唯一)ID。 -
--current <boolean> (default = 0)
--current <布尔值>(默认 = 0) -
Get current values (instead of pending values).
获取当前值(而非待定值)。 - --snapshot <string>
-
Fetch config values from given snapshot.
从指定快照中获取配置值。
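For example, to show the currently applied configuration of VM 100 and dump its generated cloud-init user data (the VMID is a placeholder):
例如,显示虚拟机 100 当前生效的配置,并导出其自动生成的 cloud-init 用户数据(VMID 仅为示例):
qm config 100 --current 1
qm cloudinit dump 100 user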
qm create <vmid> [OPTIONS]
Create or restore a virtual machine.
创建或恢复虚拟机。
-
<vmid>: <integer> (100 - 999999999)
<vmid>:<整数>(100 - 999999999) -
The (unique) ID of the VM.
虚拟机的(唯一)ID。 -
--acpi <boolean> (default = 1)
--acpi <布尔值>(默认 = 1) -
Enable/disable ACPI. 启用/禁用 ACPI。
- --affinity <string>
-
List of host cores used to execute guest processes, for example: 0,5,8-11
用于执行客户机进程的主机核心列表,例如:0,5,8-11 - --agent [enabled=]<1|0> [,freeze-fs-on-backup=<1|0>] [,fstrim_cloned_disks=<1|0>] [,type=<virtio|isa>]
-
Enable/disable communication with the QEMU Guest Agent and its properties.
启用/禁用与 QEMU 客户机代理的通信及其属性。 - --amd-sev [type=]<sev-type> [,allow-smt=<1|0>] [,kernel-hashes=<1|0>] [,no-debug=<1|0>] [,no-key-sharing=<1|0>]
-
Secure Encrypted Virtualization (SEV) features by AMD CPUs
AMD CPU 的安全加密虚拟化(SEV)功能 - --arch <aarch64 | x86_64>
-
Virtual processor architecture. Defaults to the host.
虚拟处理器架构。默认为主机架构。 - --archive <string>
-
The backup archive. Either the file system path to a .tar or .vma file (use - to pipe data from stdin) or a proxmox storage backup volume identifier.
备份归档。可以是指向.tar 或.vma 文件的文件系统路径(使用-表示从标准输入管道传输数据),也可以是 proxmox 存储备份卷标识符。 - --args <string>
-
Arbitrary arguments passed to kvm.
传递给 kvm 的任意参数。 -
--audio0 device=<ich9-intel-hda|intel-hda|AC97> [,driver=<spice|none>]
--audio0 设备=<ich9-intel-hda|intel-hda|AC97> [,驱动=<spice|none>] -
Configure an audio device, useful in combination with QXL/Spice.
配置音频设备,适用于与 QXL/Spice 结合使用。 -
--autostart <boolean> (default = 0)
--autostart <布尔值>(默认 = 0) -
Automatic restart after crash (currently ignored).
崩溃后自动重启(当前忽略)。 -
--balloon <integer> (0 - N)
--balloon <整数> (0 - N) -
Amount of target RAM for the VM in MiB. Using zero disables the balloon driver.
虚拟机目标内存大小,单位为 MiB。设置为零则禁用气球驱动。 -
--bios <ovmf | seabios> (default = seabios)
--bios <ovmf | seabios> (默认 = seabios) -
Select BIOS implementation.
选择 BIOS 实现方式。 - --boot [[legacy=]<[acdn]{1,4}>] [,order=<device[;device...]>]
-
Specify guest boot order. Use the order= sub-property, as usage with no key or legacy= is deprecated.
指定客户机启动顺序。使用 order= 子属性,直接使用无键或 legacy= 的方式已被弃用。 - --bootdisk (ide|sata|scsi|virtio)\d+
-
Enable booting from specified disk. Deprecated: Use boot: order=foo;bar instead.
启用从指定磁盘启动。已弃用:请改用 boot: order=foo;bar。 -
--bwlimit <integer> (0 - N) (default = restore limit from datacenter or storage config)
--bwlimit <整数> (0 - N)(默认 = 从数据中心或存储配置中恢复限制) -
Override I/O bandwidth limit (in KiB/s).
覆盖 I/O 带宽限制(以 KiB/s 为单位)。 - --cdrom <volume> --cdrom <卷>
-
This is an alias for option -ide2
这是选项 -ide2 的别名。 -
--cicustom [meta=<volume>] [,network=<volume>] [,user=<volume>] [,vendor=<volume>]
--cicustom [meta=<卷>] [,network=<卷>] [,user=<卷>] [,vendor=<卷>] -
cloud-init: Specify custom files to replace the automatically generated ones at start.
cloud-init:指定自定义文件以替换启动时自动生成的文件。 - --cipassword <password> --cipassword <密码>
-
cloud-init: Password to assign the user. Using this is generally not recommended. Use ssh keys instead. Also note that older cloud-init versions do not support hashed passwords.
cloud-init:分配给用户的密码。通常不建议使用此选项,建议使用 ssh 密钥。另外请注意,较旧版本的 cloud-init 不支持哈希密码。 - --citype <configdrive2 | nocloud | opennebula>
-
Specifies the cloud-init configuration format. The default depends on the configured operating system type (ostype). We use the nocloud format for Linux, and configdrive2 for Windows.
指定 cloud-init 的配置格式。默认值取决于配置的操作系统类型(ostype)。我们对 Linux 使用 nocloud 格式,对 Windows 使用 configdrive2 格式。 -
--ciupgrade <boolean> (default = 1)
--ciupgrade <boolean> (默认 = 1) -
cloud-init: do an automatic package upgrade after the first boot.
cloud-init:首次启动后自动进行软件包升级。 - --ciuser <string> --ciuser <字符串>
-
cloud-init: User name to change ssh keys and password for instead of the image’s configured default user.
cloud-init:用于更改 SSH 密钥和密码的用户名,替代镜像中配置的默认用户。 -
--cores <integer> (1 - N) (default = 1)
--cores <整数> (1 - N) (默认 = 1) -
The number of cores per socket.
每个插槽的核心数。 -
--cpu [[cputype=]<string>] [,flags=<+FLAG[;-FLAG...]>] [,hidden=<1|0>] [,hv-vendor-id=<vendor-id>] [,phys-bits=<8-64|host>] [,reported-model=<enum>]
--cpu [[cputype=]<字符串>] [,flags=<+FLAG[;-FLAG...]>] [,hidden=<1|0>] [,hv-vendor-id=<vendor-id>] [,phys-bits=<8-64|host>] [,reported-model=<枚举>] -
Emulated CPU type. 模拟的 CPU 类型。
-
--cpulimit <number> (0 - 128) (default = 0)
--cpulimit <数字> (0 - 128) (默认 = 0) -
Limit of CPU usage.
CPU 使用限制。 -
--cpuunits <integer> (1 - 262144) (default = cgroup v1: 1024, cgroup v2: 100)
--cpuunits <整数> (1 - 262144) (默认 = cgroup v1: 1024, cgroup v2: 100) -
CPU weight for a VM, will be clamped to [1, 10000] in cgroup v2.
虚拟机的 CPU 权重,在 cgroup v2 中会限制在[1, 10000]范围内。 - --description <string> --description <字符串>
-
Description for the VM. Shown in the web-interface VM’s summary. This is saved as comment inside the configuration file.
虚拟机的描述。在网页界面虚拟机摘要中显示。此信息作为注释保存在配置文件中。 - --efidisk0 [file=]<volume> [,efitype=<2m|4m>] [,format=<enum>] [,import-from=<source volume>] [,pre-enrolled-keys=<1|0>] [,size=<DiskSize>]
-
Configure a disk for storing EFI vars. Use the special syntax STORAGE_ID:SIZE_IN_GiB to allocate a new volume. Note that SIZE_IN_GiB is ignored here and that the default EFI vars are copied to the volume instead. Use STORAGE_ID:0 and the import-from parameter to import from an existing volume.
配置用于存储 EFI 变量的磁盘。使用特殊语法 STORAGE_ID:SIZE_IN_GiB 来分配新卷。请注意,这里会忽略 SIZE_IN_GiB,默认的 EFI 变量会被复制到该卷。使用 STORAGE_ID:0 和 import-from 参数从现有卷导入。 - --force <boolean>
-
Allow to overwrite existing VM.
允许覆盖已有的虚拟机。Requires option(s): archive
需要选项:archive - --freeze <boolean>
-
Freeze CPU at startup (use c monitor command to start execution).
启动时冻结 CPU(使用 c 监视器命令开始执行)。 - --hookscript <string>
-
Script that will be executed during various steps in the vms lifetime.
将在虚拟机生命周期的各个步骤中执行的脚本。 -
--hostpci[n] [[host=]<HOSTPCIID[;HOSTPCIID2...]>] [,device-id=<hex id>] [,legacy-igd=<1|0>] [,mapping=<mapping-id>] [,mdev=<string>] [,pcie=<1|0>] [,rombar=<1|0>] [,romfile=<string>] [,sub-device-id=<hex id>] [,sub-vendor-id=<hex id>] [,vendor-id=<hex id>] [,x-vga=<1|0>]
--hostpci[n] [[host=]<HOSTPCIID[;HOSTPCIID2...]>] [,device-id=<十六进制 ID>] [,legacy-igd=<1|0>] [,mapping=<映射 ID>] [,mdev=<字符串>] [,pcie=<1|0>] [,rombar=<1|0>] [,romfile=<字符串>] [,sub-device-id=<十六进制 ID>] [,sub-vendor-id=<十六进制 ID>] [,vendor-id=<十六进制 ID>] [,x-vga=<1|0>] -
Map host PCI devices into guest.
将主机 PCI 设备映射到客户机。 -
--hotplug <string> (default = network,disk,usb)
--hotplug <字符串>(默认 = network,disk,usb) -
Selectively enable hotplug features. This is a comma separated list of hotplug features: network, disk, cpu, memory, usb and cloudinit. Use 0 to disable hotplug completely. Using 1 as value is an alias for the default network,disk,usb. USB hotplugging is possible for guests with machine version >= 7.1 and ostype l26 or windows > 7.
选择性启用热插拔功能。这是一个以逗号分隔的热插拔功能列表:network、disk、cpu、memory、usb 和 cloudinit。使用 0 完全禁用热插拔。使用 1 作为值是默认启用 network、disk、usb 的别名。对于机器版本 >= 7.1 且操作系统类型为 l26 或 Windows > 7 的虚拟机,支持 USB 热插拔。 - --hugepages <1024 | 2 | any>
-
Enable/disable hugepages memory.
启用/禁用大页内存。 -
--ide[n] [file=]<volume> [,aio=<native|threads|io_uring>] [,backup=<1|0>] [,bps=<bps>] [,bps_max_length=<seconds>] [,bps_rd=<bps>] [,bps_rd_max_length=<seconds>] [,bps_wr=<bps>] [,bps_wr_max_length=<seconds>] [,cache=<enum>] [,cyls=<integer>] [,detect_zeroes=<1|0>] [,discard=<ignore|on>] [,format=<enum>] [,heads=<integer>] [,import-from=<source volume>] [,iops=<iops>] [,iops_max=<iops>] [,iops_max_length=<seconds>] [,iops_rd=<iops>] [,iops_rd_max=<iops>] [,iops_rd_max_length=<seconds>] [,iops_wr=<iops>] [,iops_wr_max=<iops>] [,iops_wr_max_length=<seconds>] [,mbps=<mbps>] [,mbps_max=<mbps>] [,mbps_rd=<mbps>] [,mbps_rd_max=<mbps>] [,mbps_wr=<mbps>] [,mbps_wr_max=<mbps>] [,media=<cdrom|disk>] [,model=<model>] [,replicate=<1|0>] [,rerror=<ignore|report|stop>] [,secs=<integer>] [,serial=<serial>] [,shared=<1|0>] [,size=<DiskSize>] [,snapshot=<1|0>] [,ssd=<1|0>] [,trans=<none|lba|auto>] [,werror=<enum>] [,wwn=<wwn>]
--ide[n] [file=]<卷> [,aio=<native|threads|io_uring>] [,backup=<1|0>] [,bps=<bps>] [,bps_max_length=<秒>] [,bps_rd=<bps>] [,bps_rd_max_length=<秒>] [,bps_wr=<bps>] [,bps_wr_max_length=<秒>] [,cache=<枚举>] [,cyls=<整数>] [,detect_zeroes=<1|0>] [,discard=<ignore|on>] [,format=<枚举>] [,heads=<整数>] [,import-from=<源卷>] [,iops=<iops>] [,iops_max=<iops>] [,iops_max_length=<秒>] [,iops_rd=<iops>] [,iops_rd_max=<iops>] [,iops_rd_max_length=<秒>] [,iops_wr=<iops>] [,iops_wr_max=<iops>] [,iops_wr_max_length=<秒>] [,mbps=<mbps>] [,mbps_max=<mbps>] [,mbps_rd=<mbps>] [,mbps_rd_max=<mbps>] [,mbps_wr=<mbps>] [,mbps_wr_max=<mbps>] [,media=<cdrom|disk>] [,model=<型号>] [,replicate=<1|0>] [,rerror=<ignore|report|stop>] [,secs=<整数>] [,serial=<序列号>] [,shared=<1|0>] [,size=<磁盘大小>] [,snapshot=<1|0>] [,ssd=<1|0>] [,trans=<none|lba|auto>] [,werror=<枚举>] [,wwn=<wwn>] -
Use volume as IDE hard disk or CD-ROM (n is 0 to 3). Use the special syntax STORAGE_ID:SIZE_IN_GiB to allocate a new volume. Use STORAGE_ID:0 and the import-from parameter to import from an existing volume.
使用卷作为 IDE 硬盘或光驱(n 为 0 到 3)。使用特殊语法 STORAGE_ID:SIZE_IN_GiB 来分配新卷。使用 STORAGE_ID:0 和 import-from 参数从现有卷导入。 -
--import-working-storage <storage ID>
--import-working-storage <存储 ID> -
A file-based storage with images content-type enabled, which is used as an intermediary extraction storage during import. Defaults to the source storage.
启用了镜像内容类型的基于文件的存储,用作导入过程中的中间提取存储。默认为源存储。 - --ipconfig[n] [gw=<GatewayIPv4>] [,gw6=<GatewayIPv6>] [,ip=<IPv4Format/CIDR>] [,ip6=<IPv6Format/CIDR>]
-
cloud-init: Specify IP addresses and gateways for the corresponding interface.
cloud-init:为相应的接口指定 IP 地址和网关。IP addresses use CIDR notation, gateways are optional but need an IP of the same type specified.
IP 地址使用 CIDR 表示法,网关为可选项,但需要指定相同类型的 IP。The special string dhcp can be used for IP addresses to use DHCP, in which case no explicit gateway should be provided. For IPv6 the special string auto can be used to use stateless autoconfiguration. This requires cloud-init 19.4 or newer.
特殊字符串 dhcp 可用于 IP 地址以使用 DHCP,在这种情况下不应提供显式的网关。对于 IPv6,可以使用特殊字符串 auto 来使用无状态自动配置。这需要 cloud-init 19.4 或更高版本。If cloud-init is enabled and neither an IPv4 nor an IPv6 address is specified, it defaults to using dhcp on IPv4.
如果启用了 cloud-init 且未指定 IPv4 或 IPv6 地址,则默认使用 IPv4 的 dhcp。 - --ivshmem size=<integer> [,name=<string>]
-
Inter-VM shared memory. Useful for direct communication between VMs, or to the host.
虚拟机间共享内存。适用于虚拟机之间或与主机之间的直接通信。 -
--keephugepages <boolean> (default = 0)
--keephugepages <boolean>(默认值 = 0) -
Use together with hugepages. If enabled, hugepages will not be deleted after VM shutdown and can be used for subsequent starts.
与 hugepages 一起使用。如果启用,VM 关闭后 hugepages 不会被删除,可以用于后续启动。 - --keyboard <da | de | de-ch | en-gb | en-us | es | fi | fr | fr-be | fr-ca | fr-ch | hu | is | it | ja | lt | mk | nl | no | pl | pt | pt-br | sl | sv | tr>
-
Keyboard layout for VNC server. This option is generally not required and is often better handled from within the guest OS.
VNC 服务器的键盘布局。此选项通常不需要,且通常更适合在客户操作系统内进行设置。 -
--kvm <boolean> (default = 1)
--kvm <boolean>(默认 = 1) -
Enable/disable KVM hardware virtualization.
启用/禁用 KVM 硬件虚拟化。 - --live-restore <boolean>
-
Start the VM immediately while importing or restoring in the background.
在后台导入或恢复时立即启动虚拟机。 - --localtime <boolean>
-
Set the real time clock (RTC) to local time. This is enabled by default if the ostype indicates a Microsoft Windows OS.
将实时时钟(RTC)设置为本地时间。如果操作系统类型指示为 Microsoft Windows 操作系统,则默认启用此功能。 - --lock <backup | clone | create | migrate | rollback | snapshot | snapshot-delete | suspended | suspending>
-
Lock/unlock the VM. 锁定/解锁虚拟机。
-
--machine [[type=]<machine type>] [,enable-s3=<1|0>] [,enable-s4=<1|0>] [,viommu=<intel|virtio>]
--machine [[type=]<机器类型>] [,enable-s3=<1|0>] [,enable-s4=<1|0>] [,viommu=<intel|virtio>] -
Specify the QEMU machine.
指定 QEMU 机器。 -
--memory [current=]<integer>
--memory [current=]<整数> -
Memory properties. 内存属性。
-
--migrate_downtime <number> (0 - N) (default = 0.1)
--migrate_downtime <数字> (0 - N) (默认 = 0.1) -
Set maximum tolerated downtime (in seconds) for migrations. Should the migration not be able to converge in the very end, because too much newly dirtied RAM needs to be transferred, the limit will be increased automatically step-by-step until migration can converge.
设置迁移时允许的最大停机时间(秒)。如果迁移在最后阶段无法收敛,因为需要传输过多新脏的内存,限制将自动逐步增加,直到迁移能够收敛。 -
--migrate_speed <integer> (0 - N) (default = 0)
--migrate_speed <整数> (0 - N) (默认 = 0) -
Set maximum speed (in MB/s) for migrations. Value 0 is no limit.
设置迁移的最大速度(MB/s)。值为 0 表示无限制。 - --name <string>
-
Set a name for the VM. Only used on the configuration web interface.
为虚拟机设置名称。仅在配置网页界面中使用。 - --nameserver <string>
-
cloud-init: Sets DNS server IP address for a container. Create will automatically use the setting from the host if neither searchdomain nor nameserver are set.
cloud-init:为容器设置 DNS 服务器 IP 地址。如果未设置 searchdomain 和 nameserver,创建时将自动使用主机的设置。 -
--net[n] [model=]<enum> [,bridge=<bridge>] [,firewall=<1|0>] [,link_down=<1|0>] [,macaddr=<XX:XX:XX:XX:XX:XX>] [,mtu=<integer>] [,queues=<integer>] [,rate=<number>] [,tag=<integer>] [,trunks=<vlanid[;vlanid...]>] [,<model>=<macaddr>]
--net[n] [model=]<枚举> [,bridge=<桥>] [,firewall=<1|0>] [,link_down=<1|0>] [,macaddr=<XX:XX:XX:XX:XX:XX>] [,mtu=<整数>] [,queues=<整数>] [,rate=<数字>] [,tag=<整数>] [,trunks=<vlanid[;vlanid...]>] [,<model>=<macaddr>] -
Specify network devices.
指定网络设备。 -
--numa <boolean> (default = 0)
--numa <布尔值>(默认 = 0) -
Enable/disable NUMA. 启用/禁用 NUMA。
-
--numa[n] cpus=<id[-id];...> [,hostnodes=<id[-id];...>] [,memory=<number>] [,policy=<preferred|bind|interleave>]
--numa[n] cpus=<id[-id];...> [,hostnodes=<id[-id];...>] [,memory=<数字>] [,policy=<preferred|bind|interleave>] -
NUMA topology. NUMA 拓扑结构。
-
--onboot <boolean> (default = 0)
--onboot <布尔值>(默认 = 0) -
Specifies whether a VM will be started during system bootup.
指定虚拟机是否在系统启动时自动启动。 - --ostype <l24 | l26 | other | solaris | w2k | w2k3 | w2k8 | win10 | win11 | win7 | win8 | wvista | wxp>
-
Specify guest operating system.
指定客户机操作系统。 - --parallel[n] /dev/parport\d+|/dev/usb/lp\d+
-
Map host parallel devices (n is 0 to 2).
映射主机并行设备(n 为 0 到 2)。 - --pool <string>
-
Add the VM to the specified pool.
将虚拟机添加到指定的资源池。 -
--protection <boolean> (default = 0)
--protection <boolean> (默认 = 0) -
Sets the protection flag of the VM. This will disable the remove VM and remove disk operations.
设置虚拟机的保护标志。此操作将禁用删除虚拟机和删除磁盘的操作。 -
--reboot <boolean> (default = 1)
--reboot <boolean>(默认 = 1) -
Allow reboot. If set to 0 the VM exits on reboot.
允许重启。如果设置为 0,虚拟机在重启时退出。 - --rng0 [source=]</dev/urandom|/dev/random|/dev/hwrng> [,max_bytes=<integer>] [,period=<integer>]
-
Configure a VirtIO-based Random Number Generator.
配置基于 VirtIO 的随机数生成器。 - --sata[n] [file=]<volume> [,aio=<native|threads|io_uring>] [,backup=<1|0>] [,bps=<bps>] [,bps_max_length=<seconds>] [,bps_rd=<bps>] [,bps_rd_max_length=<seconds>] [,bps_wr=<bps>] [,bps_wr_max_length=<seconds>] [,cache=<enum>] [,cyls=<integer>] [,detect_zeroes=<1|0>] [,discard=<ignore|on>] [,format=<enum>] [,heads=<integer>] [,import-from=<source volume>] [,iops=<iops>] [,iops_max=<iops>] [,iops_max_length=<seconds>] [,iops_rd=<iops>] [,iops_rd_max=<iops>] [,iops_rd_max_length=<seconds>] [,iops_wr=<iops>] [,iops_wr_max=<iops>] [,iops_wr_max_length=<seconds>] [,mbps=<mbps>] [,mbps_max=<mbps>] [,mbps_rd=<mbps>] [,mbps_rd_max=<mbps>] [,mbps_wr=<mbps>] [,mbps_wr_max=<mbps>] [,media=<cdrom|disk>] [,replicate=<1|0>] [,rerror=<ignore|report|stop>] [,secs=<integer>] [,serial=<serial>] [,shared=<1|0>] [,size=<DiskSize>] [,snapshot=<1|0>] [,ssd=<1|0>] [,trans=<none|lba|auto>] [,werror=<enum>] [,wwn=<wwn>]
-
Use volume as SATA hard disk or CD-ROM (n is 0 to 5). Use the special syntax STORAGE_ID:SIZE_IN_GiB to allocate a new volume. Use STORAGE_ID:0 and the import-from parameter to import from an existing volume.
将卷用作 SATA 硬盘或光驱(n 为 0 到 5)。使用特殊语法 STORAGE_ID:SIZE_IN_GiB 来分配新卷。使用 STORAGE_ID:0 和 import-from 参数从现有卷导入。 -
--scsi[n] [file=]<volume> [,aio=<native|threads|io_uring>] [,backup=<1|0>] [,bps=<bps>] [,bps_max_length=<seconds>] [,bps_rd=<bps>] [,bps_rd_max_length=<seconds>] [,bps_wr=<bps>] [,bps_wr_max_length=<seconds>] [,cache=<enum>] [,cyls=<integer>] [,detect_zeroes=<1|0>] [,discard=<ignore|on>] [,format=<enum>] [,heads=<integer>] [,import-from=<source volume>] [,iops=<iops>] [,iops_max=<iops>] [,iops_max_length=<seconds>] [,iops_rd=<iops>] [,iops_rd_max=<iops>] [,iops_rd_max_length=<seconds>] [,iops_wr=<iops>] [,iops_wr_max=<iops>] [,iops_wr_max_length=<seconds>] [,iothread=<1|0>] [,mbps=<mbps>] [,mbps_max=<mbps>] [,mbps_rd=<mbps>] [,mbps_rd_max=<mbps>] [,mbps_wr=<mbps>] [,mbps_wr_max=<mbps>] [,media=<cdrom|disk>] [,product=<product>] [,queues=<integer>] [,replicate=<1|0>] [,rerror=<ignore|report|stop>] [,ro=<1|0>] [,scsiblock=<1|0>] [,secs=<integer>] [,serial=<serial>] [,shared=<1|0>] [,size=<DiskSize>] [,snapshot=<1|0>] [,ssd=<1|0>] [,trans=<none|lba|auto>] [,vendor=<vendor>] [,werror=<enum>] [,wwn=<wwn>]
--scsi[n] [file=]<卷> [,aio=<native|threads|io_uring>] [,backup=<1|0>] [,bps=<bps>] [,bps_max_length=<秒>] [,bps_rd=<bps>] [,bps_rd_max_length=<秒>] [,bps_wr=<bps>] [,bps_wr_max_length=<秒>] [,cache=<枚举>] [,cyls=<整数>] [,detect_zeroes=<1|0>] [,discard=<ignore|on>] [,format=<枚举>] [,heads=<整数>] [,import-from=<源卷>] [,iops=<iops>] [,iops_max=<iops>] [,iops_max_length=<秒>] [,iops_rd=<iops>] [,iops_rd_max=<iops>] [,iops_rd_max_length=<秒>] [,iops_wr=<iops>] [,iops_wr_max=<iops>] [,iops_wr_max_length=<秒>] [,iothread=<1|0>] [,mbps=<mbps>] [,mbps_max=<mbps>] [,mbps_rd=<mbps>] [,mbps_rd_max=<mbps>] [,mbps_wr=<mbps>] [,mbps_wr_max=<mbps>] [,media=<cdrom|disk>] [,product=<产品>] [,queues=<整数>] [,replicate=<1|0>] [,rerror=<ignore|report|stop>] [,ro=<1|0>] [,scsiblock=<1|0>] [,secs=<整数>] [,serial=<序列号>] [,shared=<1|0>] [,size=<磁盘大小>] [,snapshot=<1|0>] [,ssd=<1|0>] [,trans=<none|lba|auto>] [,vendor=<厂商>] [,werror=<枚举>] [,wwn=<wwn>] -
Use volume as SCSI hard disk or CD-ROM (n is 0 to 30). Use the special syntax STORAGE_ID:SIZE_IN_GiB to allocate a new volume. Use STORAGE_ID:0 and the import-from parameter to import from an existing volume.
将卷用作 SCSI 硬盘或 CD-ROM(n 的取值范围为 0 到 30)。使用特殊语法 STORAGE_ID:SIZE_IN_GiB 来分配一个新卷。使用 STORAGE_ID:0 和 import-from 参数从现有卷导入。 -
--scsihw <lsi | lsi53c810 | megasas | pvscsi | virtio-scsi-pci | virtio-scsi-single> (default = lsi)
--scsihw <lsi | lsi53c810 | megasas | pvscsi | virtio-scsi-pci | virtio-scsi-single>(默认 = lsi) -
SCSI controller model SCSI 控制器型号
- --searchdomain <string> --searchdomain <字符串>
-
cloud-init: Sets DNS search domains for a container. Create will automatically use the setting from the host if neither searchdomain nor nameserver are set.
cloud-init:为容器设置 DNS 搜索域。如果既未设置 searchdomain 也未设置 nameserver,创建时将自动使用主机的设置。 - --serial[n] (/dev/.+|socket)
-
Create a serial device inside the VM (n is 0 to 3)
在虚拟机内创建一个串行设备(n 为 0 到 3) -
--shares <integer> (0 - 50000) (default = 1000)
--shares <整数>(0 - 50000)(默认值 = 1000) -
Amount of memory shares for auto-ballooning. The larger the number is, the more memory this VM gets. Number is relative to weights of all other running VMs. Using zero disables auto-ballooning. Auto-ballooning is done by pvestatd.
自动气球内存份额的数量。数字越大,该虚拟机获得的内存越多。该数字相对于所有其他正在运行的虚拟机的权重。使用零将禁用自动气球。自动气球由 pvestatd 执行。 -
--smbios1 [base64=<1|0>] [,family=<Base64 encoded string>] [,manufacturer=<Base64 encoded string>] [,product=<Base64 encoded string>] [,serial=<Base64 encoded string>] [,sku=<Base64 encoded string>] [,uuid=<UUID>] [,version=<Base64 encoded string>]
--smbios1 [base64=<1|0>] [,family=<Base64 编码字符串>] [,manufacturer=<Base64 编码字符串>] [,product=<Base64 编码字符串>] [,serial=<Base64 编码字符串>] [,sku=<Base64 编码字符串>] [,uuid=<UUID>] [,version=<Base64 编码字符串>] -
Specify SMBIOS type 1 fields.
指定 SMBIOS 类型 1 字段。 -
--smp <integer> (1 - N) (default = 1)
--smp <整数> (1 - N) (默认 = 1) -
The number of CPUs. Please use option -sockets instead.
CPU 数量。请改用选项 -sockets。 -
--sockets <integer> (1 - N) (default = 1)
--sockets <整数> (1 - N) (默认 = 1) -
The number of CPU sockets.
CPU 插槽数量。 - --spice_enhancements [foldersharing=<1|0>] [,videostreaming=<off|all|filter>]
-
Configure additional enhancements for SPICE.
配置 SPICE 的额外增强功能。 - --sshkeys <filepath> --sshkeys <文件路径>
-
cloud-init: Setup public SSH keys (one key per line, OpenSSH format).
cloud-init:设置公共 SSH 密钥(每行一个密钥,OpenSSH 格式)。 -
--start <boolean> (default = 0)
--start <布尔值>(默认 = 0) -
Start VM after it was created successfully.
在虚拟机成功创建后启动虚拟机。 -
--startdate (now | YYYY-MM-DD | YYYY-MM-DDTHH:MM:SS) (default = now)
--startdate(now | YYYY-MM-DD | YYYY-MM-DDTHH:MM:SS)(默认值 = now) -
Set the initial date of the real time clock. Valid formats for the date are: 'now', 2006-06-17T16:01:21 or 2006-06-17.
设置实时时钟的初始日期。有效的日期格式为:“now”或 2006-06-17T16:01:21 或 2006-06-17。 - --startup `[[order=]\d+] [,up=\d+] [,down=\d+] `
-
Startup and shutdown behavior. Order is a non-negative number defining the general startup order. Shutdown is done with reverse ordering. Additionally you can set the up or down delay in seconds, which specifies a delay to wait before the next VM is started or stopped.
启动和关闭行为。顺序是一个非负数,定义了一般的启动顺序。关闭时按相反顺序进行。此外,您可以设置启动或关闭的延迟时间(以秒为单位),指定在启动或关闭下一个虚拟机之前等待的时间。 - --storage <storage ID> --storage <存储 ID>
-
Default storage. 默认存储。
-
--tablet <boolean> (default = 1)
--tablet <布尔值>(默认 = 1) -
Enable/disable the USB tablet device.
启用/禁用 USB 平板设备。 - --tags <string>
-
Tags of the VM. This is only meta information.
虚拟机的标签。这只是元信息。 -
--tdf <boolean> (default = 0)
--tdf <boolean>(默认值 = 0) -
Enable/disable time drift fix.
启用/禁用时间漂移修正。 -
--template <boolean> (default = 0)
--template <boolean>(默认 = 0) -
Enable/disable Template.
启用/禁用模板。 -
--tpmstate0 [file=]<volume> [,import-from=<source volume>] [,size=<DiskSize>] [,version=<v1.2|v2.0>]
--tpmstate0 [file=]<卷> [,import-from=<源卷>] [,size=<磁盘大小>] [,version=<v1.2|v2.0>] -
Configure a Disk for storing TPM state. The format is fixed to raw. Use the special syntax STORAGE_ID:SIZE_IN_GiB to allocate a new volume. Note that SIZE_IN_GiB is ignored here and 4 MiB will be used instead. Use STORAGE_ID:0 and the import-from parameter to import from an existing volume.
配置用于存储 TPM 状态的磁盘。格式固定为 raw。使用特殊语法 STORAGE_ID:SIZE_IN_GiB 来分配新卷。请注意,这里会忽略 SIZE_IN_GiB,改用 4 MiB。使用 STORAGE_ID:0 和 import-from 参数从现有卷导入。 - --unique <boolean>
-
Assign a unique random ethernet address.
分配唯一的随机以太网地址。Requires option(s): archive
需要选项:archive - --unused[n] [file=]<volume>
-
Reference to unused volumes. This is used internally, and should not be modified manually.
引用未使用的卷。这是内部使用的,不应手动修改。 - --usb[n] [[host=]<HOSTUSBDEVICE|spice>] [,mapping=<mapping-id>] [,usb3=<1|0>]
-
Configure a USB device (n is 0 to 4, for machine version >= 7.1 and ostype l26 or windows > 7, n can be up to 14).
配置一个 USB 设备(n 为 0 到 4,对于机器版本>=7.1 且操作系统类型为 l26 或 Windows > 7,n 最多可达 14)。 -
--vcpus <integer> (1 - N) (default = 0)
--vcpus <整数> (1 - N) (默认 = 0) -
Number of hotplugged vcpus.
热插拔 vcpus 的数量。 -
--vga [[type=]<enum>] [,clipboard=<vnc>] [,memory=<integer>]
--vga [[type=]<枚举>] [,clipboard=<vnc>] [,memory=<整数>] -
Configure the VGA hardware.
配置 VGA 硬件。 - --virtio[n] [file=]<volume> [,aio=<native|threads|io_uring>] [,backup=<1|0>] [,bps=<bps>] [,bps_max_length=<seconds>] [,bps_rd=<bps>] [,bps_rd_max_length=<seconds>] [,bps_wr=<bps>] [,bps_wr_max_length=<seconds>] [,cache=<enum>] [,cyls=<integer>] [,detect_zeroes=<1|0>] [,discard=<ignore|on>] [,format=<enum>] [,heads=<integer>] [,import-from=<source volume>] [,iops=<iops>] [,iops_max=<iops>] [,iops_max_length=<seconds>] [,iops_rd=<iops>] [,iops_rd_max=<iops>] [,iops_rd_max_length=<seconds>] [,iops_wr=<iops>] [,iops_wr_max=<iops>] [,iops_wr_max_length=<seconds>] [,iothread=<1|0>] [,mbps=<mbps>] [,mbps_max=<mbps>] [,mbps_rd=<mbps>] [,mbps_rd_max=<mbps>] [,mbps_wr=<mbps>] [,mbps_wr_max=<mbps>] [,media=<cdrom|disk>] [,replicate=<1|0>] [,rerror=<ignore|report|stop>] [,ro=<1|0>] [,secs=<integer>] [,serial=<serial>] [,shared=<1|0>] [,size=<DiskSize>] [,snapshot=<1|0>] [,trans=<none|lba|auto>] [,werror=<enum>]
-
Use volume as VIRTIO hard disk (n is 0 to 15). Use the special syntax STORAGE_ID:SIZE_IN_GiB to allocate a new volume. Use STORAGE_ID:0 and the import-from parameter to import from an existing volume.
使用卷作为 VIRTIO 硬盘(n 为 0 到 15)。使用特殊语法 STORAGE_ID:SIZE_IN_GiB 来分配新卷。使用 STORAGE_ID:0 和 import-from 参数从现有卷导入。 -
--virtiofs[n] [dirid=]<mapping-id> [,cache=<enum>] [,direct-io=<1|0>] [,expose-acl=<1|0>] [,expose-xattr=<1|0>]
--virtiofs[n] [dirid=]<mapping-id> [,cache=<枚举>] [,direct-io=<1|0>] [,expose-acl=<1|0>] [,expose-xattr=<1|0>] -
Configuration for sharing a directory between host and guest using Virtio-fs.
使用 Virtio-fs 在主机和虚拟机之间共享目录的配置。 -
--vmgenid <UUID> (default = 1 (autogenerated))
--vmgenid <UUID>(默认 = 1(自动生成)) -
Set VM Generation ID. Use 1 to autogenerate on create or update, pass 0 to disable explicitly.
设置虚拟机生成 ID。使用 1 表示在创建或更新时自动生成,传入 0 表示显式禁用。 -
--vmstatestorage <storage ID>
--vmstatestorage <存储 ID> -
Default storage for VM state volumes/files.
虚拟机状态卷/文件的默认存储。 -
--watchdog [[model=]<i6300esb|ib700>] [,action=<enum>]
--watchdog [[model=]<i6300esb|ib700>] [,action=<枚举>] -
Create a virtual hardware watchdog device.
创建一个虚拟硬件看门狗设备。
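A minimal creation example: a Linux VM with 2 cores, 2 GiB RAM, a 32 GiB SCSI disk and a bridged virtio NIC. The VMID, storage names and ISO path are placeholders for your own environment:
一个最小化的创建示例:一台 Linux 虚拟机,2 核、2 GiB 内存、32 GiB SCSI 磁盘和一块桥接的 virtio 网卡。VMID、存储名和 ISO 路径均为示例,需替换为实际环境中的值:
qm create 120 --name demo-vm --ostype l26 --cores 2 --memory 2048 --scsihw virtio-scsi-single --scsi0 local-lvm:32 --ide2 local:iso/debian-12.iso,media=cdrom --net0 virtio,bridge=vmbr0 --boot 'order=scsi0;ide2'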
qm delsnapshot <vmid> <snapname> [OPTIONS]
qm delsnapshot <vmid> <snapname> [选项]
Delete a VM snapshot. 删除虚拟机快照。
-
<vmid>: <integer> (100 - 999999999)
<vmid>:<整数>(100 - 999999999) -
The (unique) ID of the VM.
虚拟机的(唯一)ID。 - <snapname>: <string>
-
The name of the snapshot.
快照的名称。 - --force <boolean>
-
For removal from config file, even if removing disk snapshots fails.
即使删除磁盘快照失败,也从配置文件中移除。
qm destroy <vmid> [OPTIONS]
qm destroy <vmid> [选项]
Destroy the VM and all used/owned volumes. Removes any VM specific
permissions and firewall rules
销毁虚拟机及其所有使用/拥有的卷。移除任何虚拟机特定的权限和防火墙规则
-
<vmid>: <integer> (100 - 999999999)
<vmid>:<整数>(100 - 999999999) -
The (unique) ID of the VM.
虚拟机的(唯一)ID。 -
--destroy-unreferenced-disks <boolean> (default = 0)
--destroy-unreferenced-disks <boolean>(默认值 = 0) -
If set, destroy additionally all disks not referenced in the config but with a matching VMID from all enabled storages.
如果设置,将销毁所有未在配置中引用但在所有启用的存储中具有匹配 VMID 的磁盘。 - --purge <boolean>
-
Remove VMID from configurations, like backup & replication jobs and HA.
从配置中移除 VMID,例如备份和复制任务以及高可用性(HA)。 - --skiplock <boolean>
-
Ignore locks - only root is allowed to use this option.
忽略锁定 - 仅允许 root 用户使用此选项。
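For example, to destroy VM 120, purge it from backup, replication and HA configuration, and also remove any leftover disks that still carry this VMID (the VMID is a placeholder):
例如,销毁虚拟机 120,将其从备份、复制和高可用性配置中清除,并同时删除仍带有该 VMID 的遗留磁盘(VMID 仅为示例):
qm destroy 120 --purge 1 --destroy-unreferenced-disks 1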
qm disk import <vmid> <source> <storage> [OPTIONS]
Import an external disk image as an unused disk in a VM. The
image format has to be supported by qemu-img(1).
将外部磁盘镜像导入为虚拟机中的未使用磁盘。镜像格式必须被 qemu-img(1)支持。
-
<vmid>: <integer> (100 - 999999999)
<vmid>: <整数> (100 - 999999999) -
The (unique) ID of the VM.
虚拟机的(唯一)ID。 - <source>: <string> <source>: <字符串>
-
Path to the disk image to import
要导入的磁盘映像路径 - <storage>: <storage ID> <storage>: <存储 ID>
-
Target storage ID 目标存储 ID
- --format <qcow2 | raw | vmdk>
-
Target format 目标格式
- --target-disk <efidisk0 | ide0 | ide1 | ide2 | ide3 | sata0 | sata1 | sata2 | sata3 | sata4 | sata5 | scsi0 | scsi1 | scsi10 | scsi11 | scsi12 | scsi13 | scsi14 | scsi15 | scsi16 | scsi17 | scsi18 | scsi19 | scsi2 | scsi20 | scsi21 | scsi22 | scsi23 | scsi24 | scsi25 | scsi26 | scsi27 | scsi28 | scsi29 | scsi3 | scsi30 | scsi4 | scsi5 | scsi6 | scsi7 | scsi8 | scsi9 | tpmstate0 | unused0 | unused1 | unused10 | unused100 | unused101 | unused102 | unused103 | unused104 | unused105 | unused106 | unused107 | unused108 | unused109 | unused11 | unused110 | unused111 | unused112 | unused113 | unused114 | unused115 | unused116 | unused117 | unused118 | unused119 | unused12 | unused120 | unused121 | unused122 | unused123 | unused124 | unused125 | unused126 | unused127 | unused128 | unused129 | unused13 | unused130 | unused131 | unused132 | unused133 | unused134 | unused135 | unused136 | unused137 | unused138 | unused139 | unused14 | unused140 | unused141 | unused142 | unused143 | unused144 | unused145 | unused146 | unused147 | unused148 | unused149 | unused15 | unused150 | unused151 | unused152 | unused153 | unused154 | unused155 | unused156 | unused157 | unused158 | unused159 | unused16 | unused160 | unused161 | unused162 | unused163 | unused164 | unused165 | unused166 | unused167 | unused168 | unused169 | unused17 | unused170 | unused171 | unused172 | unused173 | unused174 | unused175 | unused176 | unused177 | unused178 | unused179 | unused18 | unused180 | unused181 | unused182 | unused183 | unused184 | unused185 | unused186 | unused187 | unused188 | unused189 | unused19 | unused190 | unused191 | unused192 | unused193 | unused194 | unused195 | unused196 | unused197 | unused198 | unused199 | unused2 | unused20 | unused200 | unused201 | unused202 | unused203 | unused204 | unused205 | unused206 | unused207 | unused208 | unused209 | unused21 | unused210 | unused211 | unused212 | unused213 | unused214 | unused215 | unused216 | unused217 | unused218 | unused219 | unused22 | unused220 | unused221 | unused222 | unused223 | unused224 | unused225 | unused226 | unused227 | unused228 | unused229 | unused23 | unused230 | unused231 | unused232 | unused233 | unused234 | unused235 | unused236 | unused237 | unused238 | unused239 | unused24 | unused240 | unused241 | unused242 | unused243 | unused244 | unused245 | unused246 | unused247 | unused248 | unused249 | unused25 | unused250 | unused251 | unused252 | unused253 | unused254 | unused255 | unused26 | unused27 | unused28 | unused29 | unused3 | unused30 | unused31 | unused32 | unused33 | unused34 | unused35 | unused36 | unused37 | unused38 | unused39 | unused4 | unused40 | unused41 | unused42 | unused43 | unused44 | unused45 | unused46 | unused47 | unused48 | unused49 | unused5 | unused50 | unused51 | unused52 | unused53 | unused54 | unused55 | unused56 | unused57 | unused58 | unused59 | unused6 | unused60 | unused61 | unused62 | unused63 | unused64 | unused65 | unused66 | unused67 | unused68 | unused69 | unused7 | unused70 | unused71 | unused72 | unused73 | unused74 | unused75 | unused76 | unused77 | unused78 | unused79 | unused8 | unused80 | unused81 | unused82 | unused83 | unused84 | unused85 | unused86 | unused87 | unused88 | unused89 | unused9 | unused90 | unused91 | unused92 | unused93 | unused94 | unused95 | unused96 | unused97 | unused98 | unused99 | virtio0 | virtio1 | virtio10 | virtio11 | virtio12 | virtio13 | virtio14 | virtio15 | virtio2 | virtio3 | virtio4 | virtio5 | virtio6 | virtio7 | virtio8 | virtio9>
-
The disk name where the volume will be imported to (e.g. scsi1).
卷将被导入到的磁盘名称(例如 scsi1)。
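For example, to import a qcow2 image as raw onto the storage local-lvm and attach it directly as scsi1 of VM 120 (image path, storage name and VMID are placeholders):
例如,将一个 qcow2 镜像以 raw 格式导入到存储 local-lvm,并直接作为虚拟机 120 的 scsi1 挂载(镜像路径、存储名和 VMID 均为示例):
qm disk import 120 /tmp/disk0.qcow2 local-lvm --format raw --target-disk scsi1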
qm disk move <vmid> <disk> [<storage>] [OPTIONS]
qm disk move <vmid> <disk> [<storage>] [选项]
Move volume to different storage or to a different VM.
将卷移动到不同的存储或不同的虚拟机。
-
<vmid>: <integer> (100 - 999999999)
<vmid>:<整数>(100 - 999999999) -
The (unique) ID of the VM.
虚拟机的(唯一)ID。 -
<disk>: <efidisk0 | ide0 | ide1 | ide2 | ide3 | sata0 | sata1 | sata2 | sata3 | sata4 | sata5 | scsi0 | scsi1 | scsi10 | scsi11 | scsi12 | scsi13 | scsi14 | scsi15 | scsi16 | scsi17 | scsi18 | scsi19 | scsi2 | scsi20 | scsi21 | scsi22 | scsi23 | scsi24 | scsi25 | scsi26 | scsi27 | scsi28 | scsi29 | scsi3 | scsi30 | scsi4 | scsi5 | scsi6 | scsi7 | scsi8 | scsi9 | tpmstate0 | unused0 | unused1 | unused10 | unused100 | unused101 | unused102 | unused103 | unused104 | unused105 | unused106 | unused107 | unused108 | unused109 | unused11 | unused110 | unused111 | unused112 | unused113 | unused114 | unused115 | unused116 | unused117 | unused118 | unused119 | unused12 | unused120 | unused121 | unused122 | unused123 | unused124 | unused125 | unused126 | unused127 | unused128 | unused129 | unused13 | unused130 | unused131 | unused132 | unused133 | unused134 | unused135 | unused136 | unused137 | unused138 | unused139 | unused14 | unused140 | unused141 | unused142 | unused143 | unused144 | unused145 | unused146 | unused147 | unused148 | unused149 | unused15 | unused150 | unused151 | unused152 | unused153 | unused154 | unused155 | unused156 | unused157 | unused158 | unused159 | unused16 | unused160 | unused161 | unused162 | unused163 | unused164 | unused165 | unused166 | unused167 | unused168 | unused169 | unused17 | unused170 | unused171 | unused172 | unused173 | unused174 | unused175 | unused176 | unused177 | unused178 | unused179 | unused18 | unused180 | unused181 | unused182 | unused183 | unused184 | unused185 | unused186 | unused187 | unused188 | unused189 | unused19 | unused190 | unused191 | unused192 | unused193 | unused194 | unused195 | unused196 | unused197 | unused198 | unused199 | unused2 | unused20 | unused200 | unused201 | unused202 | unused203 | unused204 | unused205 | unused206 | unused207 | unused208 | unused209 | unused21 | unused210 | unused211 | unused212 | unused213 | unused214 | unused215 | unused216 | unused217 | unused218 | unused219 | unused22 | unused220 | unused221 | unused222 | unused223 | unused224 | unused225 | unused226 | unused227 | unused228 | unused229 | unused23 | unused230 | unused231 | unused232 | unused233 | unused234 | unused235 | unused236 | unused237 | unused238 | unused239 | unused24 | unused240 | unused241 | unused242 | unused243 | unused244 | unused245 | unused246 | unused247 | unused248 | unused249 | unused25 | unused250 | unused251 | unused252 | unused253 | unused254 | unused255 | unused26 | unused27 | unused28 | unused29 | unused3 | unused30 | unused31 | unused32 | unused33 | unused34 | unused35 | unused36 | unused37 | unused38 | unused39 | unused4 | unused40 | unused41 | unused42 | unused43 | unused44 | unused45 | unused46 | unused47 | unused48 | unused49 | unused5 | unused50 | unused51 | unused52 | unused53 | unused54 | unused55 | unused56 | unused57 | unused58 | unused59 | unused6 | unused60 | unused61 | unused62 | unused63 | unused64 | unused65 | unused66 | unused67 | unused68 | unused69 | unused7 | unused70 | unused71 | unused72 | unused73 | unused74 | unused75 | unused76 | unused77 | unused78 | unused79 | unused8 | unused80 | unused81 | unused82 | unused83 | unused84 | unused85 | unused86 | unused87 | unused88 | unused89 | unused9 | unused90 | unused91 | unused92 | unused93 | unused94 | unused95 | unused96 | unused97 | unused98 | unused99 | virtio0 | virtio1 | virtio10 | virtio11 | virtio12 | virtio13 | virtio14 | virtio15 | virtio2 | virtio3 | virtio4 | virtio5 | virtio6 | virtio7 | virtio8 | virtio9>
<磁盘>: <efidisk0 | ide0 | ide1 | ide2 | ide3 | sata0 | sata1 | sata2 | sata3 | sata4 | sata5 | scsi0 | scsi1 | scsi10 | scsi11 | scsi12 | scsi13 | scsi14 | scsi15 | scsi16 | scsi17 | scsi18 | scsi19 | scsi2 | scsi20 | scsi21 | scsi22 | scsi23 | scsi24 | scsi25 | scsi26 | scsi27 | scsi28 | scsi29 | scsi3 | scsi30 | scsi4 | scsi5 | scsi6 | scsi7 | scsi8 | scsi9 | tpmstate0 | unused0 | unused1 | unused10 | unused100 | unused101 | unused102 | unused103 | unused104 | unused105 | unused106 | unused107 | unused108 | unused109 | unused11 | unused110 | unused111 | unused112 | unused113 | unused114 | unused115 | unused116 | unused117 | unused118 | unused119 | unused12 | unused120 | unused121 | unused122 | unused123 | unused124 | unused125 | unused126 | unused127 | unused128 | unused129 | unused13 | unused130 | unused131 | unused132 | unused133 | unused134 | unused135 | unused136 | unused137 | unused138 | unused139 | unused14 | unused140 | unused141 | unused142 | unused143 | unused144 | unused145 | unused146 | unused147 | unused148 | unused149 | unused15 | unused150 | unused151 | unused152 | unused153 | unused154 | unused155 | unused156 | unused157 | unused158 | unused159 | unused16 | unused160 | unused161 | unused162 | unused163 | unused164 | unused165 | unused166 | unused167 | unused168 | unused169 | unused17 | unused170 | unused171 | unused172 | unused173 | unused174 | unused175 | unused176 | unused177 | unused178 | unused179 | unused18 | unused180 | unused181 | unused182 | unused183 | unused184 | unused185 | unused186 | unused187 | unused188 | unused189 | unused19 | unused190 | unused191 | unused192 | unused193 | unused194 | unused195 | unused196 | unused197 | unused198 | unused199 | unused2 | unused20 | unused200 | unused201 | unused202 | unused203 | unused204 | unused205 | unused206 | unused207 | unused208 | unused209 | unused21 | unused210 | unused211 | unused212 | unused213 | unused214 | unused215 | unused216 | unused217 | unused218 | unused219 | unused22 | unused220 | unused221 | unused222 | unused223 | unused224 | unused225 | unused226 | unused227 | unused228 | unused229 | unused23 | unused230 | unused231 | unused232 | unused233 | unused234 | unused235 | unused236 | unused237 | unused238 | unused239 | unused24 | unused240 | unused241 | unused242 | unused243 | unused244 | unused245 | unused246 | unused247 | unused248 | unused249 | unused25 | unused250 | unused251 | unused252 | unused253 | unused254 | unused255 | unused26 | unused27 | unused28 | unused29 | unused3 | unused30 | unused31 | unused32 | unused33 | unused34 | unused35 | unused36 | unused37 | unused38 | unused39 | unused4 | unused40 | unused41 | unused42 | unused43 | unused44 | unused45 | unused46 | unused47 | unused48 | unused49 | unused5 | unused50 | unused51 | unused52 | unused53 | unused54 | unused55 | unused56 | unused57 | unused58 | unused59 | unused6 | unused60 | unused61 | unused62 | unused63 | unused64 | unused65 | unused66 | unused67 | unused68 | unused69 | unused7 | unused70 | unused71 | unused72 | unused73 | unused74 | unused75 | unused76 | unused77 | unused78 | unused79 | unused8 | unused80 | unused81 | unused82 | unused83 | unused84 | unused85 | unused86 | unused87 | unused88 | unused89 | unused9 | unused90 | unused91 | unused92 | unused93 | unused94 | unused95 | unused96 | unused97 | unused98 | unused99 | virtio0 | virtio1 | virtio10 | virtio11 | virtio12 | virtio13 | virtio14 | virtio15 | virtio2 | virtio3 | virtio4 | virtio5 | virtio6 | virtio7 | virtio8 | virtio9> -
The disk you want to move.
您想要移动的磁盘。 - <storage>: <storage ID>
-
Target storage. 目标存储。
-
--bwlimit <integer> (0 - N) (default = move limit from datacenter or storage config)
--bwlimit <整数> (0 - N)(默认 = 从数据中心或存储配置中获取移动限制) -
Override I/O bandwidth limit (in KiB/s).
覆盖 I/O 带宽限制(以 KiB/s 为单位)。 -
--delete <boolean> (default = 0)
--delete <布尔值>(默认 = 0) -
Delete the original disk after successful copy. By default the original disk is kept as unused disk.
成功复制后删除原始磁盘。默认情况下,原始磁盘作为未使用磁盘保留。 - --digest <string>
-
Prevent changes if current configuration file has different SHA1 digest. This can be used to prevent concurrent modifications.
如果当前配置文件的 SHA1 摘要不同,则阻止更改。此功能可用于防止并发修改。 - --format <qcow2 | raw | vmdk>
-
Target Format. 目标格式。
-
--target-digest <string>
--target-digest <字符串> -
Prevent changes if the current config file of the target VM has a different SHA1 digest. This can be used to detect concurrent modifications. -
如果目标虚拟机当前配置文件的 SHA1 摘要不同,则阻止更改。此功能可用于检测并发修改。 -
--target-disk <efidisk0 | ide0 | ide1 | ide2 | ide3 | sata0 | sata1 | sata2 | sata3 | sata4 | sata5 | scsi0 | scsi1 | scsi10 | scsi11 | scsi12 | scsi13 | scsi14 | scsi15 | scsi16 | scsi17 | scsi18 | scsi19 | scsi2 | scsi20 | scsi21 | scsi22 | scsi23 | scsi24 | scsi25 | scsi26 | scsi27 | scsi28 | scsi29 | scsi3 | scsi30 | scsi4 | scsi5 | scsi6 | scsi7 | scsi8 | scsi9 | tpmstate0 | unused0 | unused1 | unused10 | unused100 | unused101 | unused102 | unused103 | unused104 | unused105 | unused106 | unused107 | unused108 | unused109 | unused11 | unused110 | unused111 | unused112 | unused113 | unused114 | unused115 | unused116 | unused117 | unused118 | unused119 | unused12 | unused120 | unused121 | unused122 | unused123 | unused124 | unused125 | unused126 | unused127 | unused128 | unused129 | unused13 | unused130 | unused131 | unused132 | unused133 | unused134 | unused135 | unused136 | unused137 | unused138 | unused139 | unused14 | unused140 | unused141 | unused142 | unused143 | unused144 | unused145 | unused146 | unused147 | unused148 | unused149 | unused15 | unused150 | unused151 | unused152 | unused153 | unused154 | unused155 | unused156 | unused157 | unused158 | unused159 | unused16 | unused160 | unused161 | unused162 | unused163 | unused164 | unused165 | unused166 | unused167 | unused168 | unused169 | unused17 | unused170 | unused171 | unused172 | unused173 | unused174 | unused175 | unused176 | unused177 | unused178 | unused179 | unused18 | unused180 | unused181 | unused182 | unused183 | unused184 | unused185 | unused186 | unused187 | unused188 | unused189 | unused19 | unused190 | unused191 | unused192 | unused193 | unused194 | unused195 | unused196 | unused197 | unused198 | unused199 | unused2 | unused20 | unused200 | unused201 | unused202 | unused203 | unused204 | unused205 | unused206 | unused207 | unused208 | unused209 | unused21 | unused210 | unused211 | unused212 | unused213 | unused214 | unused215 | unused216 | unused217 | unused218 | unused219 | unused22 | unused220 | unused221 | unused222 | unused223 | unused224 | unused225 | unused226 | unused227 | unused228 | unused229 | unused23 | unused230 | unused231 | unused232 | unused233 | unused234 | unused235 | unused236 | unused237 | unused238 | unused239 | unused24 | unused240 | unused241 | unused242 | unused243 | unused244 | unused245 | unused246 | unused247 | unused248 | unused249 | unused25 | unused250 | unused251 | unused252 | unused253 | unused254 | unused255 | unused26 | unused27 | unused28 | unused29 | unused3 | unused30 | unused31 | unused32 | unused33 | unused34 | unused35 | unused36 | unused37 | unused38 | unused39 | unused4 | unused40 | unused41 | unused42 | unused43 | unused44 | unused45 | unused46 | unused47 | unused48 | unused49 | unused5 | unused50 | unused51 | unused52 | unused53 | unused54 | unused55 | unused56 | unused57 | unused58 | unused59 | unused6 | unused60 | unused61 | unused62 | unused63 | unused64 | unused65 | unused66 | unused67 | unused68 | unused69 | unused7 | unused70 | unused71 | unused72 | unused73 | unused74 | unused75 | unused76 | unused77 | unused78 | unused79 | unused8 | unused80 | unused81 | unused82 | unused83 | unused84 | unused85 | unused86 | unused87 | unused88 | unused89 | unused9 | unused90 | unused91 | unused92 | unused93 | unused94 | unused95 | unused96 | unused97 | unused98 | unused99 | virtio0 | virtio1 | virtio10 | virtio11 | virtio12 | virtio13 | virtio14 | virtio15 | virtio2 | virtio3 | virtio4 | virtio5 | virtio6 | virtio7 | virtio8 | virtio9>
--target-disk <efidisk0 | ide0-3 | sata0-5 | scsi0-30 | tpmstate0 | unused0-255 | virtio0-15>(完整取值枚举与上一行英文所列相同) -
The config key the disk will be moved to on the target VM (for example, ide0 or scsi1). Default is the source disk key.
磁盘将在目标虚拟机上移动到的配置键(例如,ide0 或 scsi1)。默认是源磁盘键。 -
--target-vmid <integer> (100 - 999999999)
--target-vmid <整数> (100 - 999999999) -
The (unique) ID of the VM.
虚拟机的(唯一)ID。
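For example, to move scsi1 of VM 120 to another storage and delete the source disk, or to hand the same disk over to VM 121 as scsi2 (VMIDs and the storage name are placeholders):
例如,将虚拟机 120 的 scsi1 移动到另一个存储并删除源盘,或将同一磁盘转移给虚拟机 121 作为 scsi2(VMID 与存储名仅为示例):
qm disk move 120 scsi1 local-zfs --delete 1
qm disk move 120 scsi1 --target-vmid 121 --target-disk scsi2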
qm disk rescan [OPTIONS] qm disk rescan [选项]
Rescan all storages and update disk sizes and unused disk images.
重新扫描所有存储并更新磁盘大小及未使用的磁盘映像。
-
--dryrun <boolean> (default = 0)
--dryrun <boolean>(默认 = 0) -
Do not actually write changes out to VM config(s).
不实际写入更改到虚拟机配置。 -
--vmid <integer> (100 - 999999999)
--vmid <整数>(100 - 999999999) -
The (unique) ID of the VM.
虚拟机的(唯一)ID。
qm disk resize <vmid> <disk> <size> [OPTIONS]
qm disk resize <vmid> <disk> <size> [选项]
Extend volume size. 扩展卷大小。
-
<vmid>: <integer> (100 - 999999999)
<vmid>:<整数>(100 - 999999999) -
The (unique) ID of the VM.
虚拟机的(唯一)ID。 - <disk>: <efidisk0 | ide0 | ide1 | ide2 | ide3 | sata0 | sata1 | sata2 | sata3 | sata4 | sata5 | scsi0 | scsi1 | scsi10 | scsi11 | scsi12 | scsi13 | scsi14 | scsi15 | scsi16 | scsi17 | scsi18 | scsi19 | scsi2 | scsi20 | scsi21 | scsi22 | scsi23 | scsi24 | scsi25 | scsi26 | scsi27 | scsi28 | scsi29 | scsi3 | scsi30 | scsi4 | scsi5 | scsi6 | scsi7 | scsi8 | scsi9 | tpmstate0 | virtio0 | virtio1 | virtio10 | virtio11 | virtio12 | virtio13 | virtio14 | virtio15 | virtio2 | virtio3 | virtio4 | virtio5 | virtio6 | virtio7 | virtio8 | virtio9>
-
The disk you want to resize.
您想要调整大小的磁盘。 - <size>: \+?\d+(\.\d+)?[KMGT]?
-
The new size. With the + sign the value is added to the actual size of the volume and without it, the value is taken as an absolute one. Shrinking disk size is not supported.
新的大小。带有 + 符号时,数值将被加到卷的当前大小上;不带 + 符号时,数值被视为绝对值。不支持缩小磁盘大小。 - --digest <string>
-
Prevent changes if current configuration file has different SHA1 digest. This can be used to prevent concurrent modifications.
如果当前配置文件的 SHA1 摘要不同,则阻止更改。此功能可用于防止并发修改。 - --skiplock <boolean>
-
Ignore locks - only root is allowed to use this option.
忽略锁定——只有 root 用户被允许使用此选项。
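For example, to grow scsi0 of VM 120 by 10 GiB (the VMID is a placeholder); note that shrinking is not supported:
例如,将虚拟机 120 的 scsi0 扩大 10 GiB(VMID 仅为示例);注意不支持缩小磁盘:
qm disk resize 120 scsi0 +10G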
qm disk unlink <vmid> --idlist <string> [OPTIONS]
qm disk unlink <vmid> --idlist <string> [选项]
Unlink/delete disk images.
取消链接/删除磁盘镜像。
-
<vmid>: <integer> (100 - 999999999)
<vmid>:<整数>(100 - 999999999) -
The (unique) ID of the VM.
虚拟机的(唯一)ID。 - --force <boolean>
-
Force physical removal. Without this, we simply remove the disk from the config file and create an additional configuration entry called unused[n], which contains the volume ID. Unlink of unused[n] always causes physical removal.
强制物理删除。没有此选项时,我们仅从配置文件中移除磁盘,并创建一个名为 unused[n]的额外配置条目,其中包含卷 ID。取消链接 unused[n]总是会导致物理删除。 - --idlist <string>
-
A list of disk IDs you want to delete.
要删除的磁盘 ID 列表。
qm guest cmd <vmid> <command>
Execute QEMU Guest Agent commands.
执行 QEMU 客户机代理命令。
-
<vmid>: <integer> (100 - 999999999)
<vmid>:<整数>(100 - 999999999) -
The (unique) ID of the VM.
虚拟机的(唯一)ID。 -
<command>: <fsfreeze-freeze | fsfreeze-status | fsfreeze-thaw | fstrim | get-fsinfo | get-host-name | get-memory-block-info | get-memory-blocks | get-osinfo | get-time | get-timezone | get-users | get-vcpus | info | network-get-interfaces | ping | shutdown | suspend-disk | suspend-hybrid | suspend-ram>
<command>:<fsfreeze-freeze | fsfreeze-status | fsfreeze-thaw | fstrim | get-fsinfo | get-host-name | get-memory-block-info | get-memory-blocks | get-osinfo | get-time | get-timezone | get-users | get-vcpus | info | network-get-interfaces | ping | shutdown | suspend-disk | suspend-hybrid | suspend-ram> -
The QGA command. QGA 命令。
qm guest exec <vmid> [<extra-args>] [OPTIONS]
Executes the given command via the guest agent
通过客户机代理执行给定的命令
-
<vmid>: <integer> (100 - 999999999)
<vmid>: <整数> (100 - 999999999) -
The (unique) ID of the VM.
虚拟机的(唯一)ID。 - <extra-args>: <array> <extra-args>: <数组>
-
Extra arguments as array
额外参数作为数组 -
--pass-stdin <boolean> (default = 0)
--pass-stdin <布尔值>(默认 = 0) -
When set, read STDIN until EOF and forward to guest agent via input-data (usually treated as STDIN of the process launched by the guest agent). Allows a maximum of 1 MiB.
设置后,读取 STDIN 直到 EOF,并通过 input-data 转发给客户代理(通常作为客户代理启动的进程的 STDIN 处理)。最大允许 1 MiB。 -
--synchronous <boolean> (default = 1)
--synchronous <布尔值>(默认 = 1) -
If set to off, returns the pid immediately instead of waiting for the command to finish or the timeout.
如果设置为关闭,则立即返回进程 ID,而不是等待命令完成或超时。 -
--timeout <integer> (0 - N) (default = 30)
--timeout <整数> (0 - N) (默认 = 30) -
The maximum time to wait synchronously for the command to finish. If reached, the pid gets returned. Set to 0 to deactivate
等待命令同步完成的最长时间。如果达到该时间,将返回进程 ID。设置为 0 表示禁用。
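For example, to check that the guest agent answers and then run a command inside VM 120 via the agent (the VMID and the command are placeholders):
例如,先检查客户机代理是否响应,再通过代理在虚拟机 120 内执行一条命令(VMID 与命令仅为示例):
qm guest cmd 120 ping
qm guest exec 120 -- cat /etc/os-release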
qm guest exec-status <vmid> <pid>
Gets the status of the given pid started by the guest-agent
获取由 guest-agent 启动的给定 pid 的状态
-
<vmid>: <integer> (100 - 999999999)
<vmid>:<整数>(100 - 999999999) -
The (unique) ID of the VM.
虚拟机的(唯一)ID。 - <pid>: <integer> <pid>:<整数>
-
The PID to query
要查询的 PID
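A sketch of pairing this with an asynchronous execution on a hypothetical VM 100; the PID in the second call is whatever the first call returned (8421 is only a placeholder):
qm guest exec 100 --synchronous 0 -- sleep 60
qm guest exec-status 100 8421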
qm guest passwd <vmid> <username> [OPTIONS]
qm guest passwd <vmid> <username> [选项]
Sets the password for the given user to the given password
为指定用户设置密码
-
<vmid>: <integer> (100 - 999999999)
<vmid>:<整数>(100 - 999999999) -
The (unique) ID of the VM.
虚拟机的(唯一)ID。 - <username>: <string> <username>: <字符串>
-
The user to set the password for.
要设置密码的用户。 -
--crypted <boolean> (default = 0)
--crypted <布尔值>(默认 = 0) -
set to 1 if the password has already been passed through crypt()
如果密码已经通过 crypt() 处理,则设置为 1
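For example, to reset the root password inside VM 100 via the guest agent (the new password is asked for interactively rather than passed on the command line):
qm guest passwd 100 root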
qm help [OPTIONS] qm help [选项]
Get help about specified command.
获取指定命令的帮助。
- --extra-args <array> --extra-args <数组>
-
Shows help for a specific command
显示特定命令的帮助信息 - --verbose <boolean>
-
Verbose output format. 详细输出格式。
qm import <vmid> <source> --storage <string> [OPTIONS]
qm import <vmid> <source> --storage <string> [选项]
Import a foreign virtual guest from a supported import source, such as an
ESXi storage.
从支持的导入源(如 ESXi 存储)导入外部虚拟客户机。
-
<vmid>: <integer> (100 - 999999999)
<vmid>:<整数>(100 - 999999999) -
The (unique) ID of the VM.
虚拟机的(唯一)ID。 - <source>: <string> <source>:<字符串>
-
The import source volume id.
导入源卷 ID。 -
--acpi <boolean> (default = 1)
--acpi <boolean>(默认 = 1) -
Enable/disable ACPI. 启用/禁用 ACPI。
- --affinity <string>
-
List of host cores used to execute guest processes, for example: 0,5,8-11
用于执行客户机进程的主机核心列表,例如:0,5,8-11 - --agent [enabled=]<1|0> [,freeze-fs-on-backup=<1|0>] [,fstrim_cloned_disks=<1|0>] [,type=<virtio|isa>]
-
Enable/disable communication with the QEMU Guest Agent and its properties.
启用/禁用与 QEMU 客户机代理及其属性的通信。 - --amd-sev [type=]<sev-type> [,allow-smt=<1|0>] [,kernel-hashes=<1|0>] [,no-debug=<1|0>] [,no-key-sharing=<1|0>]
-
Secure Encrypted Virtualization (SEV) features by AMD CPUs
AMD CPU 的安全加密虚拟化(SEV)功能 - --arch <aarch64 | x86_64>
-
Virtual processor architecture. Defaults to the host.
虚拟处理器架构。默认为主机架构。 - --args <string>
-
Arbitrary arguments passed to kvm.
传递给 kvm 的任意参数。 - --audio0 device=<ich9-intel-hda|intel-hda|AC97> [,driver=<spice|none>]
-
Configure an audio device, useful in combination with QXL/Spice.
配置音频设备,适用于与 QXL/Spice 结合使用。 -
--autostart <boolean> (default = 0)
--autostart <boolean>(默认值 = 0) -
Automatic restart after crash (currently ignored).
崩溃后自动重启(当前被忽略)。 -
--balloon <integer> (0 - N)
--balloon <整数> (0 - N) -
Amount of target RAM for the VM in MiB. Using zero disables the balloon driver.
虚拟机的目标内存大小,单位为 MiB。使用零值将禁用气球驱动。 -
--bios <ovmf | seabios> (default = seabios)
--bios <ovmf | seabios>(默认 = seabios) -
Select BIOS implementation.
选择 BIOS 实现。 - --boot [[legacy=]<[acdn]{1,4}>] [,order=<device[;device...]>]
-
Specify guest boot order. Use the order= sub-property, as usage with no key or with legacy= is deprecated.
指定客户机启动顺序。使用 order= 子属性,使用无键或 legacy= 的方式已被弃用。 - --bootdisk (ide|sata|scsi|virtio)\d+
-
Enable booting from specified disk. Deprecated: Use boot: order=foo;bar instead.
启用从指定磁盘启动。已弃用:请改用 boot: order=foo;bar。 - --cdrom <volume>
-
This is an alias for option -ide2
这是选项 -ide2 的别名 - --cicustom [meta=<volume>] [,network=<volume>] [,user=<volume>] [,vendor=<volume>]
-
cloud-init: Specify custom files to replace the automatically generated ones at start.
cloud-init:指定自定义文件以替换启动时自动生成的文件。 - --cipassword <string> --cipassword <字符串>
-
cloud-init: Password to assign the user. Using this is generally not recommended. Use ssh keys instead. Also note that older cloud-init versions do not support hashed passwords.
cloud-init:分配给用户的密码。通常不建议使用此方法,建议使用 ssh 密钥。同时请注意,较旧版本的 cloud-init 不支持哈希密码。 - --citype <configdrive2 | nocloud | opennebula>
-
Specifies the cloud-init configuration format. The default depends on the configured operating system type (ostype). We use the nocloud format for Linux, and configdrive2 for Windows.
指定 cloud-init 配置格式。默认值取决于配置的操作系统类型(ostype)。我们对 Linux 使用 nocloud 格式,对 Windows 使用 configdrive2。 -
--ciupgrade <boolean> (default = 1)
--ciupgrade <boolean>(默认 = 1) -
cloud-init: do an automatic package upgrade after the first boot.
cloud-init:首次启动后自动升级软件包。 - --ciuser <string>
-
cloud-init: User name to change ssh keys and password for instead of the image’s configured default user.
cloud-init:用于更改 SSH 密钥和密码的用户名,替代镜像中配置的默认用户。 -
--cores <integer> (1 - N) (default = 1)
--cores <整数>(1 - N)(默认 = 1) -
The number of cores per socket.
每个插槽的核心数。 -
--cpu [[cputype=]<string>] [,flags=<+FLAG[;-FLAG...]>] [,hidden=<1|0>] [,hv-vendor-id=<vendor-id>] [,phys-bits=<8-64|host>] [,reported-model=<enum>]
--cpu [[cputype=]<字符串>] [,flags=<+FLAG[;-FLAG...]>] [,hidden=<1|0>] [,hv-vendor-id=<vendor-id>] [,phys-bits=<8-64|host>] [,reported-model=<枚举>] -
Emulated CPU type. 模拟的 CPU 类型。
-
--cpulimit <number> (0 - 128) (default = 0)
--cpulimit <数字> (0 - 128) (默认 = 0) -
Limit of CPU usage.
CPU 使用限制。 -
--cpuunits <integer> (1 - 262144) (default = cgroup v1: 1024, cgroup v2: 100)
--cpuunits <整数> (1 - 262144) (默认 = cgroup v1: 1024, cgroup v2: 100) -
CPU weight for a VM, will be clamped to [1, 10000] in cgroup v2.
虚拟机的 CPU 权重,在 cgroup v2 中将被限制在[1, 10000]范围内。 - --delete <string>
-
A list of settings you want to delete.
您想要删除的设置列表。 - --description <string>
-
Description for the VM. Shown in the web-interface VM’s summary. This is saved as comment inside the configuration file.
虚拟机的描述。在网页界面虚拟机摘要中显示。此内容作为注释保存在配置文件中。 -
--dryrun <boolean> (default = 0)
--dryrun <布尔值>(默认 = 0) -
Show the create command and exit without doing anything.
显示创建命令并退出,不执行任何操作。 -
--efidisk0 [file=]<volume> [,efitype=<2m|4m>] [,format=<enum>] [,pre-enrolled-keys=<1|0>] [,size=<DiskSize>]
--efidisk0 [file=]<卷> [,efitype=<2m|4m>] [,format=<枚举>] [,pre-enrolled-keys=<1|0>] [,size=<磁盘大小>] -
Configure a disk for storing EFI vars.
配置用于存储 EFI 变量的磁盘。 - --format <qcow2 | raw | vmdk>
-
Target format 目标格式
- --freeze <boolean>
-
Freeze CPU at startup (use c monitor command to start execution).
启动时冻结 CPU(使用 c monitor 命令开始执行)。 - --hookscript <string>
-
Script that will be executed during various steps in the VM's lifetime.
将在虚拟机生命周期的各个步骤中执行的脚本。 - --hostpci[n] [[host=]<HOSTPCIID[;HOSTPCIID2...]>] [,device-id=<hex id>] [,legacy-igd=<1|0>] [,mapping=<mapping-id>] [,mdev=<string>] [,pcie=<1|0>] [,rombar=<1|0>] [,romfile=<string>] [,sub-device-id=<hex id>] [,sub-vendor-id=<hex id>] [,vendor-id=<hex id>] [,x-vga=<1|0>]
-
Map host PCI devices into guest.
将主机 PCI 设备映射到客户机。 -
--hotplug <string> (default = network,disk,usb)
--hotplug <string>(默认 = network,disk,usb) -
Selectively enable hotplug features. This is a comma separated list of hotplug features: network, disk, cpu, memory, usb and cloudinit. Use 0 to disable hotplug completely. Using 1 as value is an alias for the default network,disk,usb. USB hotplugging is possible for guests with machine version >= 7.1 and ostype l26 or windows > 7.
选择性启用热插拔功能。这是一个以逗号分隔的热插拔功能列表:network、disk、cpu、memory、usb 和 cloudinit。使用 0 表示完全禁用热插拔。使用 1 作为值是默认 network,disk,usb 的别名。对于机器版本 >= 7.1 且操作系统类型为 l26 或 Windows > 7 的客户机,支持 USB 热插拔。 - --hugepages <1024 | 2 | any>
-
Enable/disable hugepages memory.
启用/禁用大页内存。 - --ide[n] [file=]<volume> [,aio=<native|threads|io_uring>] [,backup=<1|0>] [,bps=<bps>] [,bps_max_length=<seconds>] [,bps_rd=<bps>] [,bps_rd_max_length=<seconds>] [,bps_wr=<bps>] [,bps_wr_max_length=<seconds>] [,cache=<enum>] [,cyls=<integer>] [,detect_zeroes=<1|0>] [,discard=<ignore|on>] [,format=<enum>] [,heads=<integer>] [,iops=<iops>] [,iops_max=<iops>] [,iops_max_length=<seconds>] [,iops_rd=<iops>] [,iops_rd_max=<iops>] [,iops_rd_max_length=<seconds>] [,iops_wr=<iops>] [,iops_wr_max=<iops>] [,iops_wr_max_length=<seconds>] [,mbps=<mbps>] [,mbps_max=<mbps>] [,mbps_rd=<mbps>] [,mbps_rd_max=<mbps>] [,mbps_wr=<mbps>] [,mbps_wr_max=<mbps>] [,media=<cdrom|disk>] [,model=<model>] [,replicate=<1|0>] [,rerror=<ignore|report|stop>] [,secs=<integer>] [,serial=<serial>] [,shared=<1|0>] [,size=<DiskSize>] [,snapshot=<1|0>] [,ssd=<1|0>] [,trans=<none|lba|auto>] [,werror=<enum>] [,wwn=<wwn>]
-
Use volume as IDE hard disk or CD-ROM (n is 0 to 3).
将卷用作 IDE 硬盘或光驱(n 为 0 到 3)。 - --ipconfig[n] [gw=<GatewayIPv4>] [,gw6=<GatewayIPv6>] [,ip=<IPv4Format/CIDR>] [,ip6=<IPv6Format/CIDR>]
-
cloud-init: Specify IP addresses and gateways for the corresponding interface.
cloud-init:为相应的接口指定 IP 地址和网关。IP addresses use CIDR notation, gateways are optional but need an IP of the same type specified.
IP 地址使用 CIDR 表示法,网关是可选的,但需要指定相同类型的 IP。The special string dhcp can be used for IP addresses to use DHCP, in which case no explicit gateway should be provided. For IPv6 the special string auto can be used to use stateless autoconfiguration. This requires cloud-init 19.4 or newer.
IP 地址可以使用特殊字符串 dhcp 来启用 DHCP,此时不应提供显式的网关。对于 IPv6,可以使用特殊字符串 auto 来启用无状态自动配置。这需要 cloud-init 19.4 或更高版本。If cloud-init is enabled and neither an IPv4 nor an IPv6 address is specified, it defaults to using dhcp on IPv4.
如果启用了 cloud-init 且未指定 IPv4 或 IPv6 地址,则默认使用 IPv4 的 dhcp。 -
--ivshmem size=<integer> [,name=<string>]
--ivshmem size=<整数> [,name=<字符串>] -
Inter-VM shared memory. Useful for direct communication between VMs, or to the host.
虚拟机间共享内存。适用于虚拟机之间或与主机之间的直接通信。 -
--keephugepages <boolean> (default = 0)
--keephugepages <布尔值>(默认 = 0) -
Use together with hugepages. If enabled, hugepages will not be deleted after VM shutdown and can be used for subsequent starts.
与大页一起使用。如果启用,大页在虚拟机关闭后不会被删除,可用于后续启动。 - --keyboard <da | de | de-ch | en-gb | en-us | es | fi | fr | fr-be | fr-ca | fr-ch | hu | is | it | ja | lt | mk | nl | no | pl | pt | pt-br | sl | sv | tr>
-
Keyboard layout for VNC server. This option is generally not required and is often better handled from within the guest OS.
VNC 服务器的键盘布局。此选项通常不需要,通常更适合在客户操作系统内进行设置。 -
--kvm <boolean> (default = 1)
--kvm <boolean> (默认 = 1) -
Enable/disable KVM hardware virtualization.
启用/禁用 KVM 硬件虚拟化。 -
--live-import <boolean> (default = 0)
--live-import <boolean>(默认值 = 0) -
Immediately start the VM and copy the data in the background.
立即启动虚拟机并在后台复制数据。 - --localtime <boolean>
-
Set the real time clock (RTC) to local time. This is enabled by default if the ostype indicates a Microsoft Windows OS.
将实时时钟(RTC)设置为本地时间。如果操作系统类型指示为 Microsoft Windows 操作系统,则默认启用此功能。 -
--lock <backup | clone | create | migrate | rollback | snapshot | snapshot-delete | suspended | suspending>
--lock <backup | clone | create | migrate | rollback | snapshot | snapshot-delete | suspended | suspending> -
Lock/unlock the VM. 锁定/解锁虚拟机。
-
--machine [[type=]<machine type>] [,enable-s3=<1|0>] [,enable-s4=<1|0>] [,viommu=<intel|virtio>]
--machine [[type=]<机器类型>] [,enable-s3=<1|0>] [,enable-s4=<1|0>] [,viommu=<intel|virtio>] -
Specify the QEMU machine.
指定 QEMU 机器。 -
--memory [current=]<integer>
--memory [current=]<整数> -
Memory properties. 内存属性。
-
--migrate_downtime <number> (0 - N) (default = 0.1)
--migrate_downtime <数字> (0 - N) (默认 = 0.1) -
Set maximum tolerated downtime (in seconds) for migrations. Should the migration not be able to converge in the very end, because too much newly dirtied RAM needs to be transferred, the limit will be increased automatically step-by-step until migration can converge.
设置迁移的最大容忍停机时间(秒)。如果迁移在最后阶段无法收敛,因为需要传输过多新脏的内存,限制将自动逐步增加,直到迁移能够收敛。 -
--migrate_speed <integer> (0 - N) (default = 0)
--migrate_speed <整数> (0 - N) (默认 = 0) -
Set maximum speed (in MB/s) for migrations. Value 0 is no limit.
设置迁移的最大速度(单位:MB/s)。值为 0 表示无限制。 - --name <string> --name <字符串>
-
Set a name for the VM. Only used on the configuration web interface.
为虚拟机设置名称。仅在配置网页界面中使用。 - --nameserver <string>
-
cloud-init: Sets DNS server IP address for a container. Create will automatically use the setting from the host if neither searchdomain nor nameserver are set.
cloud-init:为容器设置 DNS 服务器 IP 地址。如果未设置 searchdomain 和 nameserver,创建时将自动使用主机的设置。 - --net[n] [model=]<enum> [,bridge=<bridge>] [,firewall=<1|0>] [,link_down=<1|0>] [,macaddr=<XX:XX:XX:XX:XX:XX>] [,mtu=<integer>] [,queues=<integer>] [,rate=<number>] [,tag=<integer>] [,trunks=<vlanid[;vlanid...]>] [,<model>=<macaddr>]
-
Specify network devices.
指定网络设备。 -
--numa <boolean> (default = 0)
--numa <布尔值>(默认 = 0) -
Enable/disable NUMA. 启用/禁用 NUMA。
-
--numa[n] cpus=<id[-id];...> [,hostnodes=<id[-id];...>] [,memory=<number>] [,policy=<preferred|bind|interleave>]
--numa[n] cpus=<id[-id];...> [,hostnodes=<id[-id];...>] [,memory=<数字>] [,policy=<preferred|bind|interleave>] -
NUMA topology. NUMA 拓扑结构。
-
--onboot <boolean> (default = 0)
--onboot <boolean>(默认值 = 0) -
Specifies whether a VM will be started during system bootup.
指定虚拟机是否在系统启动时自动启动。 - --ostype <l24 | l26 | other | solaris | w2k | w2k3 | w2k8 | win10 | win11 | win7 | win8 | wvista | wxp>
-
Specify guest operating system.
指定客户操作系统。 - --parallel[n] /dev/parport\d+|/dev/usb/lp\d+
-
Map host parallel devices (n is 0 to 2).
映射主机并行设备(n 的取值范围为 0 到 2)。 -
--protection <boolean> (default = 0)
--protection <boolean>(默认值 = 0) -
Sets the protection flag of the VM. This will disable the remove VM and remove disk operations.
设置虚拟机的保护标志。这将禁用删除虚拟机和删除磁盘操作。 -
--reboot <boolean> (default = 1)
--reboot <boolean>(默认 = 1) -
Allow reboot. If set to 0, the VM exits on reboot.
允许重启。如果设置为 0,虚拟机在重启时退出。 - --rng0 [source=]</dev/urandom|/dev/random|/dev/hwrng> [,max_bytes=<integer>] [,period=<integer>]
-
Configure a VirtIO-based Random Number Generator.
配置基于 VirtIO 的随机数生成器。 - --sata[n] [file=]<volume> [,aio=<native|threads|io_uring>] [,backup=<1|0>] [,bps=<bps>] [,bps_max_length=<seconds>] [,bps_rd=<bps>] [,bps_rd_max_length=<seconds>] [,bps_wr=<bps>] [,bps_wr_max_length=<seconds>] [,cache=<enum>] [,cyls=<integer>] [,detect_zeroes=<1|0>] [,discard=<ignore|on>] [,format=<enum>] [,heads=<integer>] [,iops=<iops>] [,iops_max=<iops>] [,iops_max_length=<seconds>] [,iops_rd=<iops>] [,iops_rd_max=<iops>] [,iops_rd_max_length=<seconds>] [,iops_wr=<iops>] [,iops_wr_max=<iops>] [,iops_wr_max_length=<seconds>] [,mbps=<mbps>] [,mbps_max=<mbps>] [,mbps_rd=<mbps>] [,mbps_rd_max=<mbps>] [,mbps_wr=<mbps>] [,mbps_wr_max=<mbps>] [,media=<cdrom|disk>] [,replicate=<1|0>] [,rerror=<ignore|report|stop>] [,secs=<integer>] [,serial=<serial>] [,shared=<1|0>] [,size=<DiskSize>] [,snapshot=<1|0>] [,ssd=<1|0>] [,trans=<none|lba|auto>] [,werror=<enum>] [,wwn=<wwn>]
-
Use volume as SATA hard disk or CD-ROM (n is 0 to 5).
将卷用作 SATA 硬盘或光驱(n 为 0 到 5)。 - --scsi[n] [file=]<volume> [,aio=<native|threads|io_uring>] [,backup=<1|0>] [,bps=<bps>] [,bps_max_length=<seconds>] [,bps_rd=<bps>] [,bps_rd_max_length=<seconds>] [,bps_wr=<bps>] [,bps_wr_max_length=<seconds>] [,cache=<enum>] [,cyls=<integer>] [,detect_zeroes=<1|0>] [,discard=<ignore|on>] [,format=<enum>] [,heads=<integer>] [,iops=<iops>] [,iops_max=<iops>] [,iops_max_length=<seconds>] [,iops_rd=<iops>] [,iops_rd_max=<iops>] [,iops_rd_max_length=<seconds>] [,iops_wr=<iops>] [,iops_wr_max=<iops>] [,iops_wr_max_length=<seconds>] [,iothread=<1|0>] [,mbps=<mbps>] [,mbps_max=<mbps>] [,mbps_rd=<mbps>] [,mbps_rd_max=<mbps>] [,mbps_wr=<mbps>] [,mbps_wr_max=<mbps>] [,media=<cdrom|disk>] [,product=<product>] [,queues=<integer>] [,replicate=<1|0>] [,rerror=<ignore|report|stop>] [,ro=<1|0>] [,scsiblock=<1|0>] [,secs=<integer>] [,serial=<serial>] [,shared=<1|0>] [,size=<DiskSize>] [,snapshot=<1|0>] [,ssd=<1|0>] [,trans=<none|lba|auto>] [,vendor=<vendor>] [,werror=<enum>] [,wwn=<wwn>]
-
Use volume as SCSI hard disk or CD-ROM (n is 0 to 30).
使用卷作为 SCSI 硬盘或光驱(n 为 0 到 30)。 -
--scsihw <lsi | lsi53c810 | megasas | pvscsi | virtio-scsi-pci | virtio-scsi-single> (default = lsi)
--scsihw <lsi | lsi53c810 | megasas | pvscsi | virtio-scsi-pci | virtio-scsi-single>(默认 = lsi) -
SCSI controller model SCSI 控制器型号
- --searchdomain <string>
-
cloud-init: Sets DNS search domains for a container. Create will automatically use the setting from the host if neither searchdomain nor nameserver are set.
cloud-init:为容器设置 DNS 搜索域。如果未设置 searchdomain 和 nameserver,创建时将自动使用主机的设置。 - --serial[n] (/dev/.+|socket)
-
Create a serial device inside the VM (n is 0 to 3)
在虚拟机内创建一个串行设备(n 为 0 到 3) -
--shares <integer> (0 - 50000) (default = 1000)
--shares <整数> (0 - 50000) (默认 = 1000) -
Amount of memory shares for auto-ballooning. The larger the number is, the more memory this VM gets. Number is relative to weights of all other running VMs. Using zero disables auto-ballooning. Auto-ballooning is done by pvestatd.
自动气球内存的份额数量。数字越大,该虚拟机获得的内存越多。该数字相对于所有其他正在运行的虚拟机的权重。使用零将禁用自动气球。自动气球由 pvestatd 执行。 -
--smbios1 [base64=<1|0>] [,family=<Base64 encoded string>] [,manufacturer=<Base64 encoded string>] [,product=<Base64 encoded string>] [,serial=<Base64 encoded string>] [,sku=<Base64 encoded string>] [,uuid=<UUID>] [,version=<Base64 encoded string>]
--smbios1 [base64=<1|0>] [,family=<Base64 编码字符串>] [,manufacturer=<Base64 编码字符串>] [,product=<Base64 编码字符串>] [,serial=<Base64 编码字符串>] [,sku=<Base64 编码字符串>] [,uuid=<UUID>] [,version=<Base64 编码字符串>] -
Specify SMBIOS type 1 fields.
指定 SMBIOS 类型 1 字段。 -
--smp <integer> (1 - N) (default = 1)
--smp <整数> (1 - N) (默认 = 1) -
The number of CPUs. Please use option -sockets instead.
CPU 的数量。请改用 -sockets 选项。 -
--sockets <integer> (1 - N) (default = 1)
--sockets <整数> (1 - N) (默认 = 1) -
The number of CPU sockets.
CPU 插槽的数量。 - --spice_enhancements [foldersharing=<1|0>] [,videostreaming=<off|all|filter>]
-
Configure additional enhancements for SPICE.
配置 SPICE 的额外增强功能。 - --sshkeys <string>
-
cloud-init: Setup public SSH keys (one key per line, OpenSSH format).
cloud-init:设置公钥 SSH 密钥(每行一个密钥,OpenSSH 格式)。 -
--startdate (now | YYYY-MM-DD | YYYY-MM-DDTHH:MM:SS) (default = now)
--startdate(now | YYYY-MM-DD | YYYY-MM-DDTHH:MM:SS)(默认 = now) -
Set the initial date of the real time clock. Valid formats for the date are: 'now', 2006-06-17T16:01:21, or 2006-06-17.
设置实时时钟的初始日期。有效的日期格式为:“now”或 2006-06-17T16:01:21 或 2006-06-17。 - --startup `[[order=]\d+] [,up=\d+] [,down=\d+] `
-
Startup and shutdown behavior. Order is a non-negative number defining the general startup order. Shutdown is done in reverse order. Additionally, you can set the up or down delay in seconds, which specifies a delay to wait before the next VM is started or stopped.
启动和关闭行为。order 是一个非负数,定义了总体启动顺序。关闭时按相反顺序进行。此外,您可以设置 up 或 down 延迟(以秒为单位),指定在启动或关闭下一个虚拟机之前等待的时间。 - --storage <storage ID> --storage <存储 ID>
-
Default storage. 默认存储。
-
--tablet <boolean> (default = 1)
--tablet <布尔值>(默认 = 1) -
Enable/disable the USB tablet device.
启用/禁用 USB 平板设备。 - --tags <string>
-
Tags of the VM. This is only meta information.
虚拟机的标签。这只是元信息。 -
--tdf <boolean> (default = 0)
--tdf <boolean> (默认 = 0) -
Enable/disable time drift fix.
启用/禁用时间漂移修正。 -
--template <boolean> (default = 0)
--template <boolean>(默认值 = 0) -
Enable/disable Template.
启用/禁用模板。 - --tpmstate0 [file=]<volume> [,size=<DiskSize>] [,version=<v1.2|v2.0>]
-
Configure a Disk for storing TPM state. The format is fixed to raw.
配置用于存储 TPM 状态的磁盘。格式固定为 raw。 - --unused[n] [file=]<volume>
-
Reference to unused volumes. This is used internally, and should not be modified manually.
引用未使用的卷。这是内部使用的,不应手动修改。 - --usb[n] [[host=]<HOSTUSBDEVICE|spice>] [,mapping=<mapping-id>] [,usb3=<1|0>]
-
Configure a USB device (n is 0 to 4; for machine version >= 7.1 and ostype l26 or windows > 7, n can be up to 14).
配置一个 USB 设备(n 为 0 到 4,对于机器版本>=7.1 且操作系统类型为 l26 或 Windows > 7,n 最多可达 14)。 -
--vcpus <integer> (1 - N) (default = 0)
--vcpus <整数> (1 - N) (默认 = 0) -
Number of hotplugged vcpus.
热插拔 vcpus 的数量。 -
--vga [[type=]<enum>] [,clipboard=<vnc>] [,memory=<integer>]
--vga [[type=]<枚举>] [,clipboard=<vnc>] [,memory=<整数>] -
Configure the VGA hardware.
配置 VGA 硬件。 -
--virtio[n] [file=]<volume> [,aio=<native|threads|io_uring>] [,backup=<1|0>] [,bps=<bps>] [,bps_max_length=<seconds>] [,bps_rd=<bps>] [,bps_rd_max_length=<seconds>] [,bps_wr=<bps>] [,bps_wr_max_length=<seconds>] [,cache=<enum>] [,cyls=<integer>] [,detect_zeroes=<1|0>] [,discard=<ignore|on>] [,format=<enum>] [,heads=<integer>] [,iops=<iops>] [,iops_max=<iops>] [,iops_max_length=<seconds>] [,iops_rd=<iops>] [,iops_rd_max=<iops>] [,iops_rd_max_length=<seconds>] [,iops_wr=<iops>] [,iops_wr_max=<iops>] [,iops_wr_max_length=<seconds>] [,iothread=<1|0>] [,mbps=<mbps>] [,mbps_max=<mbps>] [,mbps_rd=<mbps>] [,mbps_rd_max=<mbps>] [,mbps_wr=<mbps>] [,mbps_wr_max=<mbps>] [,media=<cdrom|disk>] [,replicate=<1|0>] [,rerror=<ignore|report|stop>] [,ro=<1|0>] [,secs=<integer>] [,serial=<serial>] [,shared=<1|0>] [,size=<DiskSize>] [,snapshot=<1|0>] [,trans=<none|lba|auto>] [,werror=<enum>]
--virtio[n] [file=]<卷> [,aio=<native|threads|io_uring>] [,backup=<1|0>] [,bps=<bps>] [,bps_max_length=<秒>] [,bps_rd=<bps>] [,bps_rd_max_length=<秒>] [,bps_wr=<bps>] [,bps_wr_max_length=<秒>] [,cache=<枚举>] [,cyls=<整数>] [,detect_zeroes=<1|0>] [,discard=<ignore|on>] [,format=<枚举>] [,heads=<整数>] [,iops=<iops>] [,iops_max=<iops>] [,iops_max_length=<秒>] [,iops_rd=<iops>] [,iops_rd_max=<iops>] [,iops_rd_max_length=<秒>] [,iops_wr=<iops>] [,iops_wr_max=<iops>] [,iops_wr_max_length=<秒>] [,iothread=<1|0>] [,mbps=<mbps>] [,mbps_max=<mbps>] [,mbps_rd=<mbps>] [,mbps_rd_max=<mbps>] [,mbps_wr=<mbps>] [,mbps_wr_max=<mbps>] [,media=<cdrom|disk>] [,replicate=<1|0>] [,rerror=<ignore|report|stop>] [,ro=<1|0>] [,secs=<整数>] [,serial=<序列号>] [,shared=<1|0>] [,size=<磁盘大小>] [,snapshot=<1|0>] [,trans=<none|lba|auto>] [,werror=<枚举>] -
Use volume as VIRTIO hard disk (n is 0 to 15).
使用卷作为 VIRTIO 硬盘(n 为 0 到 15)。 -
--virtiofs[n] [dirid=]<mapping-id> [,cache=<enum>] [,direct-io=<1|0>] [,expose-acl=<1|0>] [,expose-xattr=<1|0>]
--virtiofs[n] [dirid=]<映射 ID> [,cache=<枚举>] [,direct-io=<1|0>] [,expose-acl=<1|0>] [,expose-xattr=<1|0>] -
Configuration for sharing a directory between host and guest using Virtio-fs.
使用 Virtio-fs 配置主机与客户机之间共享目录。 -
--vmgenid <UUID> (default = 1 (autogenerated))
--vmgenid <UUID>(默认 = 1(自动生成)) -
Set VM Generation ID. Use 1 to autogenerate on create or update, pass 0 to disable explicitly.
设置虚拟机生成 ID。使用 1 表示在创建或更新时自动生成,传入 0 表示显式禁用。 -
--vmstatestorage <storage ID>
--vmstatestorage <存储 ID> -
Default storage for VM state volumes/files.
虚拟机状态卷/文件的默认存储。 - --watchdog [[model=]<i6300esb|ib700>] [,action=<enum>]
-
Create a virtual hardware watchdog device.
创建一个虚拟硬件看门狗设备。
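A hedged sketch of an ESXi import, assuming an import-type storage named esxi-source has already been added and local-lvm is the target storage; --live-import starts the guest while its data is still being copied:
qm import 120 esxi-source:ha-datacenter/datastore1/web01/web01.vmx --storage local-lvm --live-import 1
The path after esxi-source: is only an illustration of a source volume ID; the actual value depends on the datastore layout of the ESXi host.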
qm importdisk
An alias for qm disk import.
qm disk import 的别名。
qm importovf <vmid> <manifest> <storage> [OPTIONS]
qm importovf <vmid> <manifest> <storage> [选项]
Create a new VM using parameters read from an OVF manifest
使用从 OVF 清单中读取的参数创建一个新的虚拟机
-
<vmid>: <integer> (100 - 999999999)
<vmid>:<整数>(100 - 999999999) -
The (unique) ID of the VM.
虚拟机的(唯一)ID。 - <manifest>: <string>
-
path to the ovf file
ovf 文件的路径 - <storage>: <storage ID>
-
Target storage ID 目标存储 ID
- --dryrun <boolean> --dryrun <布尔值>
-
Print a parsed representation of the extracted OVF parameters, but do not create a VM
打印提取的 OVF 参数的解析表示,但不创建虚拟机 - --format <qcow2 | raw | vmdk>
-
Target format 目标格式
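For example, assuming an extracted OVF appliance under /tmp/appliance, a dry run first shows the parsed parameters, and the second call actually creates VM 130 on the local-lvm storage:
qm importovf 130 /tmp/appliance/appliance.ovf local-lvm --dryrun 1
qm importovf 130 /tmp/appliance/appliance.ovf local-lvm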
qm list [OPTIONS] qm list [选项]
Virtual machine index (per node).
虚拟机索引(每个节点)。
- --full <boolean> --full <布尔值>
-
Determine the full status of active VMs.
确定活动虚拟机的完整状态。
qm listsnapshot <vmid>
List all snapshots. 列出所有快照。
-
<vmid>: <integer> (100 - 999999999)
<vmid>:<整数>(100 - 999999999) -
The (unique) ID of the VM.
虚拟机的(唯一)ID。
qm migrate <vmid> <target> [OPTIONS]
qm migrate <vmid> <target> [选项]
Migrate virtual machine. Creates a new migration task.
迁移虚拟机。创建一个新的迁移任务。
-
<vmid>: <integer> (100 - 999999999)
<vmid>:<整数>(100 - 999999999) -
The (unique) ID of the VM.
虚拟机的(唯一)ID。 - <target>: <string>
-
Target node. 目标节点。
-
--bwlimit <integer> (0 - N) (default = migrate limit from datacenter or storage config)
--bwlimit <integer> (0 - N) (默认 = 来自数据中心或存储配置的迁移限制) -
Override I/O bandwidth limit (in KiB/s).
覆盖 I/O 带宽限制(以 KiB/s 为单位)。 - --force <boolean>
-
Allow to migrate VMs which use local devices. Only root may use this option.
允许迁移使用本地设备的虚拟机。只有 root 用户可以使用此选项。 - --migration_network <string>
-
CIDR of the (sub) network that is used for migration.
用于迁移的(子)网络的 CIDR。 - --migration_type <insecure | secure>
-
Migration traffic is encrypted using an SSH tunnel by default. On secure, completely private networks this can be disabled to increase performance.
迁移流量默认通过 SSH 隧道加密。在安全的完全私有网络中,可以禁用此功能以提高性能。 - --online <boolean>
-
Use online/live migration if VM is running. Ignored if VM is stopped.
如果虚拟机正在运行,则使用在线/实时迁移。如果虚拟机已停止,则忽略此选项。 - --targetstorage <string>
-
Mapping from source to target storages. Providing only a single storage ID maps all source storages to that storage. Providing the special value 1 will map each source storage to itself.
从源存储映射到目标存储。仅提供单个存储 ID 时,会将所有源存储映射到该存储。提供特殊值 1 时,会将每个源存储映射到其自身。 - --with-local-disks <boolean>
-
Enable live storage migration for local disk
启用本地磁盘的在线存储迁移
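For instance, assuming a cluster node named pve2 and a running VM 100 with local disks, a live migration capped at roughly 100 MiB/s could look like:
qm migrate 100 pve2 --online 1 --with-local-disks 1 --bwlimit 102400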
qm monitor <vmid>
Enter QEMU Monitor interface.
进入 QEMU 监控界面。
-
<vmid>: <integer> (100 - 999999999)
<vmid>:<整数>(100 - 999999999) -
The (unique) ID of the VM.
虚拟机的(唯一)ID。
qm move-disk
An alias for qm disk move.
qm disk move 的别名。
qm move_disk
An alias for qm disk move.
qm disk move 的别名。
qm mtunnel
Used by qmigrate - do not use manually.
由 qmigrate 使用 - 请勿手动使用。
qm nbdstop <vmid>
Stop embedded nbd server.
停止嵌入的 nbd 服务器。
-
<vmid>: <integer> (100 - 999999999)
<vmid>:<整数>(100 - 999999999) -
The (unique) ID of the VM.
虚拟机的(唯一)ID。
qm pending <vmid>
Get the virtual machine configuration with both current and pending values.
获取虚拟机的配置,包括当前值和待定值。
-
<vmid>: <integer> (100 - 999999999)
<vmid>:<整数>(100 - 999999999) -
The (unique) ID of the VM.
虚拟机的(唯一)ID。
qm reboot <vmid> [OPTIONS]
qm reboot <vmid> [选项]
Reboot the VM by shutting it down, and starting it again. Applies pending
changes.
通过关闭虚拟机并重新启动来重启虚拟机。应用待处理的更改。
-
<vmid>: <integer> (100 - 999999999)
<vmid>: <整数> (100 - 999999999) -
The (unique) ID of the VM.
虚拟机的(唯一)ID。 -
--timeout <integer> (0 - N)
--timeout <整数> (0 - N) -
Wait maximal timeout seconds for the shutdown.
等待最多 timeout 秒进行关机。
qm remote-migrate <vmid> [<target-vmid>] <target-endpoint> --target-bridge <string> --target-storage <string> [OPTIONS]
qm remote-migrate <vmid> [<target-vmid>] <target-endpoint> --target-bridge <string> --target-storage <string> [选项]
Migrate virtual machine to a remote cluster. Creates a new migration task.
EXPERIMENTAL feature!
将虚拟机迁移到远程集群。创建一个新的迁移任务。实验性功能!
-
<vmid>: <integer> (100 - 999999999)
<vmid>:<整数>(100 - 999999999) -
The (unique) ID of the VM.
虚拟机的(唯一)ID。 -
<target-vmid>: <integer> (100 - 999999999)
<target-vmid>: <整数> (100 - 999999999) -
The (unique) ID of the VM.
虚拟机的(唯一)ID。 -
<target-endpoint>: apitoken=<PVEAPIToken=user@realm!token=SECRET> ,host=<ADDRESS> [,fingerprint=<FINGERPRINT>] [,port=<PORT>]
<target-endpoint>: apitoken=<PVEAPIToken=user@realm!token=SECRET> ,host=<地址> [,fingerprint=<指纹>] [,port=<端口>] -
Remote target endpoint 远程目标端点
-
--bwlimit <integer> (0 - N) (default = migrate limit from datacenter or storage config)
--bwlimit <整数> (0 - N)(默认 = 来自数据中心或存储配置的迁移限制) -
Override I/O bandwidth limit (in KiB/s).
覆盖 I/O 带宽限制(以 KiB/s 为单位)。 -
--delete <boolean> (default = 0)
--delete <布尔值>(默认 = 0) -
Delete the original VM and related data after successful migration. By default the original VM is kept on the source cluster in a stopped state.
迁移成功后删除原始虚拟机及相关数据。默认情况下,原始虚拟机以停止状态保留在源集群中。 - --online <boolean>
-
Use online/live migration if VM is running. Ignored if VM is stopped.
如果虚拟机正在运行,则使用在线/实时迁移。如果虚拟机已停止,则忽略此选项。 - --target-bridge <string>
-
Mapping from source to target bridges. Providing only a single bridge ID maps all source bridges to that bridge. Providing the special value 1 will map each source bridge to itself.
从源桥接到目标桥接的映射。仅提供单个桥接 ID 时,会将所有源桥接映射到该桥接。提供特殊值 1 时,会将每个源桥接映射到其自身。 - --target-storage <string>
-
Mapping from source to target storages. Providing only a single storage ID maps all source storages to that storage. Providing the special value 1 will map each source storage to itself.
从源存储到目标存储的映射。仅提供单个存储 ID 时,会将所有源存储映射到该存储。提供特殊值 1 时,会将每个源存储映射到其自身。
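A sketch only, with placeholder credentials and addresses: the positional target endpoint bundles the remote API token and host, while bridges and storages are mapped explicitly (or with the special value 1 to map each to itself):
qm remote-migrate 100 100 'apitoken=PVEAPIToken=root@pam!migrate=aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee,host=203.0.113.10' --target-bridge vmbr0 --target-storage local-lvm --online 1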
qm rescan
An alias for qm disk rescan.
qm disk rescan 的别名。
qm reset <vmid> [OPTIONS]
qm reset <vmid> [选项]
Reset virtual machine. 重置虚拟机。
-
<vmid>: <integer> (100 - 999999999)
<vmid>:<整数>(100 - 999999999) -
The (unique) ID of the VM.
虚拟机的(唯一)ID。 - --skiplock <boolean>
-
Ignore locks - only root is allowed to use this option.
忽略锁定 - 只有 root 用户被允许使用此选项。
qm resize
An alias for qm disk resize.
qm disk resize 的别名。
qm resume <vmid> [OPTIONS]
qm resume <vmid> [选项]
Resume virtual machine. 恢复虚拟机。
-
<vmid>: <integer> (100 - 999999999)
<vmid>:<整数>(100 - 999999999) -
The (unique) ID of the VM.
虚拟机的(唯一)ID。 - --nocheck <boolean>
-
no description available
无可用描述 - --skiplock <boolean>
-
Ignore locks - only root is allowed to use this option.
忽略锁定 - 仅允许 root 用户使用此选项。
qm rollback <vmid> <snapname> [OPTIONS]
qm rollback <vmid> <snapname> [选项]
Rollback VM state to specified snapshot.
将虚拟机状态回滚到指定的快照。
-
<vmid>: <integer> (100 - 999999999)
<vmid>:<整数>(100 - 999999999) -
The (unique) ID of the VM.
虚拟机的(唯一)ID。 - <snapname>: <string>
-
The name of the snapshot.
快照的名称。 -
--start <boolean> (default = 0)
--start <boolean>(默认 = 0) -
Whether the VM should get started after rolling back successfully. (Note: VMs will be automatically started if the snapshot includes RAM.)
回滚成功后是否应启动虚拟机。(注意:如果快照包含内存,虚拟机将自动启动。)
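For example, assuming VM 100 has a snapshot named pre-upgrade, the following rolls the VM back and starts it again afterwards:
qm rollback 100 pre-upgrade --start 1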
qm sendkey <vmid> <key> [OPTIONS]
qm sendkey <vmid> <key> [选项]
Send key event to virtual machine.
向虚拟机发送按键事件。
-
<vmid>: <integer> (100 - 999999999)
<vmid>:<整数>(100 - 999999999) -
The (unique) ID of the VM.
虚拟机的(唯一)ID。 - <key>: <string>
-
The key (qemu monitor encoding).
键(qemu 监视器编码)。 - --skiplock <boolean>
-
Ignore locks - only root is allowed to use this option.
忽略锁定——只有 root 用户被允许使用此选项。
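For example, to send Ctrl+Alt+Delete to VM 100 (key names follow the QEMU monitor encoding, with combinations joined by dashes):
qm sendkey 100 ctrl-alt-delete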
qm set <vmid> [OPTIONS] qm set <vmid> [选项]
Set virtual machine options (synchronous API) - You should consider using
the POST method instead for any actions involving hotplug or storage
allocation.
设置虚拟机选项(同步 API)- 对于涉及热插拔或存储分配的任何操作,您应考虑改用 POST 方法。
-
<vmid>: <integer> (100 - 999999999)
<vmid>:<整数>(100 - 999999999) -
The (unique) ID of the VM.
虚拟机的(唯一)ID。 -
--acpi <boolean> (default = 1)
--acpi <boolean>(默认 = 1) -
Enable/disable ACPI. 启用/禁用 ACPI。
- --affinity <string>
-
List of host cores used to execute guest processes, for example: 0,5,8-11
用于执行客户机进程的主机核心列表,例如:0,5,8-11 - --agent [enabled=]<1|0> [,freeze-fs-on-backup=<1|0>] [,fstrim_cloned_disks=<1|0>] [,type=<virtio|isa>]
-
Enable/disable communication with the QEMU Guest Agent and its properties.
启用/禁用与 QEMU 客户机代理的通信及其属性。 - --amd-sev [type=]<sev-type> [,allow-smt=<1|0>] [,kernel-hashes=<1|0>] [,no-debug=<1|0>] [,no-key-sharing=<1|0>]
-
Secure Encrypted Virtualization (SEV) features by AMD CPUs
AMD CPU 的安全加密虚拟化(SEV)功能 - --arch <aarch64 | x86_64>
-
Virtual processor architecture. Defaults to the host.
虚拟处理器架构。默认为主机架构。 - --args <string>
-
Arbitrary arguments passed to kvm.
传递给 kvm 的任意参数。 -
--audio0 device=<ich9-intel-hda|intel-hda|AC97> [,driver=<spice|none>]
--audio0 device=<ich9-intel-hda|intel-hda|AC97> [,driver=<spice|none>] -
Configure an audio device, useful in combination with QXL/Spice.
配置音频设备,适用于与 QXL/Spice 结合使用。 -
--autostart <boolean> (default = 0)
--autostart <布尔值>(默认 = 0) -
Automatic restart after crash (currently ignored).
崩溃后自动重启(当前忽略)。 -
--balloon <integer> (0 - N)
--balloon <整数> (0 - N) -
Amount of target RAM for the VM in MiB. Using zero disables the balloon driver.
虚拟机目标内存大小,单位为 MiB。设置为零则禁用气球驱动。 -
--bios <ovmf | seabios> (default = seabios)
--bios <ovmf | seabios> (默认 = seabios) -
Select BIOS implementation.
选择 BIOS 实现方式。 - --boot [[legacy=]<[acdn]{1,4}>] [,order=<device[;device...]>]
-
Specify guest boot order. Use the order= sub-property, as usage with no key or with legacy= is deprecated.
指定客户机启动顺序。使用 order= 子属性,直接使用无键或 legacy= 的方式已被弃用。 - --bootdisk (ide|sata|scsi|virtio)\d+
-
Enable booting from specified disk. Deprecated: Use boot: order=foo;bar instead.
启用从指定磁盘启动。已弃用:请改用 boot: order=foo;bar。 - --cdrom <volume>
-
This is an alias for option -ide2
这是选项 -ide2 的别名 - --cicustom [meta=<volume>] [,network=<volume>] [,user=<volume>] [,vendor=<volume>]
-
cloud-init: Specify custom files to replace the automatically generated ones at start.
cloud-init:指定自定义文件以替换启动时自动生成的文件。 - --cipassword <password> --cipassword <密码>
-
cloud-init: Password to assign the user. Using this is generally not recommended. Use ssh keys instead. Also note that older cloud-init versions do not support hashed passwords.
cloud-init:分配给用户的密码。通常不建议使用此方法。建议使用 ssh 密钥。此外,请注意较旧版本的 cloud-init 不支持哈希密码。 - --citype <configdrive2 | nocloud | opennebula>
-
Specifies the cloud-init configuration format. The default depends on the configured operating system type (ostype). We use the nocloud format for Linux, and configdrive2 for Windows.
指定 cloud-init 的配置格式。默认值取决于配置的操作系统类型(ostype)。我们对 Linux 使用 nocloud 格式,对 Windows 使用 configdrive2 格式。 -
--ciupgrade <boolean> (default = 1)
--ciupgrade <布尔值>(默认 = 1) -
cloud-init: do an automatic package upgrade after the first boot.
cloud-init:首次启动后自动进行包升级。 - --ciuser <string> --ciuser <字符串>
-
cloud-init: User name to change ssh keys and password for instead of the image’s configured default user.
cloud-init:用于更改 SSH 密钥和密码的用户名,替代镜像中配置的默认用户。 -
--cores <integer> (1 - N) (default = 1)
--cores <整数> (1 - N) (默认 = 1) -
The number of cores per socket.
每个插槽的核心数。 -
--cpu [[cputype=]<string>] [,flags=<+FLAG[;-FLAG...]>] [,hidden=<1|0>] [,hv-vendor-id=<vendor-id>] [,phys-bits=<8-64|host>] [,reported-model=<enum>]
--cpu [[cputype=]<字符串>] [,flags=<+FLAG[;-FLAG...]>] [,hidden=<1|0>] [,hv-vendor-id=<vendor-id>] [,phys-bits=<8-64|host>] [,reported-model=<枚举>] -
Emulated CPU type. 模拟的 CPU 类型。
-
--cpulimit <number> (0 - 128) (default = 0)
--cpulimit <数字> (0 - 128) (默认 = 0) -
Limit of CPU usage.
CPU 使用限制。 -
--cpuunits <integer> (1 - 262144) (default = cgroup v1: 1024, cgroup v2: 100)
--cpuunits <整数> (1 - 262144) (默认 = cgroup v1: 1024, cgroup v2: 100) -
CPU weight for a VM, will be clamped to [1, 10000] in cgroup v2.
虚拟机的 CPU 权重,在 cgroup v2 中会限制在 [1, 10000] 范围内。 - --delete <string>
-
A list of settings you want to delete.
您想要删除的设置列表。 - --description <string>
-
Description for the VM. Shown in the web-interface VM’s summary. This is saved as comment inside the configuration file.
虚拟机的描述。在网页界面虚拟机摘要中显示。此内容作为注释保存在配置文件中。 - --digest <string>
-
Prevent changes if current configuration file has different SHA1 digest. This can be used to prevent concurrent modifications.
如果当前配置文件的 SHA1 摘要不同,则阻止更改。此功能可用于防止并发修改。 - --efidisk0 [file=]<volume> [,efitype=<2m|4m>] [,format=<enum>] [,import-from=<source volume>] [,pre-enrolled-keys=<1|0>] [,size=<DiskSize>]
-
Configure a disk for storing EFI vars. Use the special syntax STORAGE_ID:SIZE_IN_GiB to allocate a new volume. Note that SIZE_IN_GiB is ignored here and that the default EFI vars are copied to the volume instead. Use STORAGE_ID:0 and the import-from parameter to import from an existing volume.
配置用于存储 EFI 变量的磁盘。使用特殊语法 STORAGE_ID:SIZE_IN_GiB 来分配新卷。请注意,这里会忽略 SIZE_IN_GiB,默认的 EFI 变量会被复制到该卷。使用 STORAGE_ID:0 和 import-from 参数可从现有卷导入。 - --force <boolean>
-
Force physical removal. Without this, we simply remove the disk from the config file and create an additional configuration entry called unused[n], which contains the volume ID. Unlinking unused[n] always causes physical removal.
强制物理移除。没有此选项时,我们仅从配置文件中移除磁盘,并创建一个名为 unused[n] 的额外配置条目,其中包含卷 ID。取消链接 unused[n] 总是会导致物理移除。Requires option(s): delete
需要选项:delete - --freeze <boolean>
-
Freeze CPU at startup (use c monitor command to start execution).
启动时冻结 CPU(使用 c monitor 命令开始执行)。 - --hookscript <string>
-
Script that will be executed during various steps in the VM's lifetime.
将在虚拟机生命周期的各个步骤中执行的脚本。 - --hostpci[n] [[host=]<HOSTPCIID[;HOSTPCIID2...]>] [,device-id=<hex id>] [,legacy-igd=<1|0>] [,mapping=<mapping-id>] [,mdev=<string>] [,pcie=<1|0>] [,rombar=<1|0>] [,romfile=<string>] [,sub-device-id=<hex id>] [,sub-vendor-id=<hex id>] [,vendor-id=<hex id>] [,x-vga=<1|0>]
-
Map host PCI devices into guest.
将主机 PCI 设备映射到客户机。 -
--hotplug <string> (default = network,disk,usb)
--hotplug <string>(默认 = network,disk,usb) -
Selectively enable hotplug features. This is a comma separated list of hotplug features: network, disk, cpu, memory, usb and cloudinit. Use 0 to disable hotplug completely. Using 1 as value is an alias for the default network,disk,usb. USB hotplugging is possible for guests with machine version >= 7.1 and ostype l26 or windows > 7.
选择性启用热插拔功能。这是一个以逗号分隔的热插拔功能列表:network、disk、cpu、memory、usb 和 cloudinit。使用 0 表示完全禁用热插拔。使用 1 作为值是默认 network,disk,usb 的别名。对于机器版本 >= 7.1 且操作系统类型为 l26 或 Windows > 7 的客户机,支持 USB 热插拔。 - --hugepages <1024 | 2 | any>
-
Enable/disable hugepages memory.
启用/禁用大页内存。 -
--ide[n] [file=]<volume> [,aio=<native|threads|io_uring>] [,backup=<1|0>] [,bps=<bps>] [,bps_max_length=<seconds>] [,bps_rd=<bps>] [,bps_rd_max_length=<seconds>] [,bps_wr=<bps>] [,bps_wr_max_length=<seconds>] [,cache=<enum>] [,cyls=<integer>] [,detect_zeroes=<1|0>] [,discard=<ignore|on>] [,format=<enum>] [,heads=<integer>] [,import-from=<source volume>] [,iops=<iops>] [,iops_max=<iops>] [,iops_max_length=<seconds>] [,iops_rd=<iops>] [,iops_rd_max=<iops>] [,iops_rd_max_length=<seconds>] [,iops_wr=<iops>] [,iops_wr_max=<iops>] [,iops_wr_max_length=<seconds>] [,mbps=<mbps>] [,mbps_max=<mbps>] [,mbps_rd=<mbps>] [,mbps_rd_max=<mbps>] [,mbps_wr=<mbps>] [,mbps_wr_max=<mbps>] [,media=<cdrom|disk>] [,model=<model>] [,replicate=<1|0>] [,rerror=<ignore|report|stop>] [,secs=<integer>] [,serial=<serial>] [,shared=<1|0>] [,size=<DiskSize>] [,snapshot=<1|0>] [,ssd=<1|0>] [,trans=<none|lba|auto>] [,werror=<enum>] [,wwn=<wwn>]
--ide[n] [file=]<volume> [,aio=<native|threads|io_uring>] [,backup=<1|0>] [,bps=<bps>] [,bps_max_length=<秒>] [,bps_rd=<bps>] [,bps_rd_max_length=<秒>] [,bps_wr=<bps>] [,bps_wr_max_length=<秒>] [,cache=<枚举>] [,cyls=<整数>] [,detect_zeroes=<1|0>] [,discard=<ignore|on>] [,format=<枚举>] [,heads=<整数>] [,import-from=<源卷>] [,iops=<iops>] [,iops_max=<iops>] [,iops_max_length=<秒>] [,iops_rd=<iops>] [,iops_rd_max=<iops>] [,iops_rd_max_length=<秒>] [,iops_wr=<iops>] [,iops_wr_max=<iops>] [,iops_wr_max_length=<秒>] [,mbps=<mbps>] [,mbps_max=<mbps>] [,mbps_rd=<mbps>] [,mbps_rd_max=<mbps>] [,mbps_wr=<mbps>] [,mbps_wr_max=<mbps>] [,media=<cdrom|disk>] [,model=<型号>] [,replicate=<1|0>] [,rerror=<ignore|report|stop>] [,secs=<整数>] [,serial=<序列号>] [,shared=<1|0>] [,size=<磁盘大小>] [,snapshot=<1|0>] [,ssd=<1|0>] [,trans=<none|lba|auto>] [,werror=<枚举>] [,wwn=<wwn>] -
Use volume as IDE hard disk or CD-ROM (n is 0 to 3). Use the special syntax STORAGE_ID:SIZE_IN_GiB to allocate a new volume. Use STORAGE_ID:0 and the import-from parameter to import from an existing volume.
将卷用作 IDE 硬盘或光驱(n 为 0 到 3)。使用特殊语法 STORAGE_ID:SIZE_IN_GiB 来分配新卷。使用 STORAGE_ID:0 和 import-from 参数从现有卷导入。 -
--ipconfig[n] [gw=<GatewayIPv4>] [,gw6=<GatewayIPv6>] [,ip=<IPv4Format/CIDR>] [,ip6=<IPv6Format/CIDR>]
--ipconfig[n] [gw=<GatewayIPv4>] [,gw6=<GatewayIPv6>] [,ip=<IPv4 格式/CIDR>] [,ip6=<IPv6 格式/CIDR>] -
cloud-init: Specify IP addresses and gateways for the corresponding interface.
cloud-init:为相应的接口指定 IP 地址和网关。IP addresses use CIDR notation, gateways are optional but need an IP of the same type specified.
IP 地址使用 CIDR 表示法,网关是可选的,但需要指定相同类型的 IP。The special string dhcp can be used for IP addresses to use DHCP, in which case no explicit gateway should be provided. For IPv6 the special string auto can be used to use stateless autoconfiguration. This requires cloud-init 19.4 or newer.
IP 地址可以使用特殊字符串 dhcp 来启用 DHCP,此时不应提供显式网关。对于 IPv6,可以使用特殊字符串 auto 来启用无状态自动配置。这需要 cloud-init 19.4 或更高版本。If cloud-init is enabled and neither an IPv4 nor an IPv6 address is specified, it defaults to using dhcp on IPv4.
如果启用了 cloud-init 且未指定 IPv4 或 IPv6 地址,则默认使用 IPv4 的 dhcp。 -
--ivshmem size=<integer> [,name=<string>]
--ivshmem size=<整数> [,name=<字符串>] -
Inter-VM shared memory. Useful for direct communication between VMs, or to the host.
虚拟机间共享内存。适用于虚拟机之间或与主机之间的直接通信。 -
--keephugepages <boolean> (default = 0)
--keephugepages <布尔值>(默认 = 0) -
Use together with hugepages. If enabled, hugepages will not be deleted after VM shutdown and can be used for subsequent starts.
与大页一起使用。如果启用,大页在虚拟机关闭后不会被删除,可以用于后续启动。 - --keyboard <da | de | de-ch | en-gb | en-us | es | fi | fr | fr-be | fr-ca | fr-ch | hu | is | it | ja | lt | mk | nl | no | pl | pt | pt-br | sl | sv | tr>
-
Keyboard layout for VNC server. This option is generally not required and is often better handled from within the guest OS.
VNC 服务器的键盘布局。此选项通常不需要,且通常更适合在客户操作系统内进行设置。 -
--kvm <boolean> (default = 1)
--kvm <boolean>(默认 = 1) -
Enable/disable KVM hardware virtualization.
启用/禁用 KVM 硬件虚拟化。 - --localtime <boolean> --localtime <布尔值>
-
Set the real time clock (RTC) to local time. This is enabled by default if the ostype indicates a Microsoft Windows OS.
将实时时钟(RTC)设置为本地时间。如果操作系统类型指示为 Microsoft Windows 操作系统,则默认启用此功能。 -
--lock <backup | clone | create | migrate | rollback | snapshot | snapshot-delete | suspended | suspending>
--lock <backup | clone | create | migrate | rollback | snapshot | snapshot-delete | suspended | suspending> -
Lock/unlock the VM. 锁定/解锁虚拟机。
-
--machine [[type=]<machine type>] [,enable-s3=<1|0>] [,enable-s4=<1|0>] [,viommu=<intel|virtio>]
--machine [[type=]<机器类型>] [,enable-s3=<1|0>] [,enable-s4=<1|0>] [,viommu=<intel|virtio>] -
Specify the QEMU machine.
指定 QEMU 机器。 -
--memory [current=]<integer>
--memory [current=]<整数> -
Memory properties. 内存属性。
-
--migrate_downtime <number> (0 - N) (default = 0.1)
--migrate_downtime <数字> (0 - N)(默认 = 0.1) -
Set maximum tolerated downtime (in seconds) for migrations. Should the migration not be able to converge in the very end, because too much newly dirtied RAM needs to be transferred, the limit will be increased automatically step-by-step until migration can converge.
设置迁移时最大允许的停机时间(秒)。如果迁移在最后阶段无法收敛,因为需要传输过多新脏的内存,限制将自动逐步增加,直到迁移能够收敛。 -
--migrate_speed <integer> (0 - N) (default = 0)
--migrate_speed <整数> (0 - N)(默认 = 0) -
Set maximum speed (in MB/s) for migrations. Value 0 is no limit.
设置迁移的最大速度(以 MB/s 为单位)。值为 0 表示无限制。 - --name <string>
-
Set a name for the VM. Only used on the configuration web interface.
为虚拟机设置名称。仅在配置网页界面中使用。 - --nameserver <string>
-
cloud-init: Sets DNS server IP address for a container. Create will automatically use the setting from the host if neither searchdomain nor nameserver are set.
cloud-init:为容器设置 DNS 服务器 IP 地址。如果未设置 searchdomain 和 nameserver,创建时将自动使用主机的设置。 -
--net[n] [model=]<enum> [,bridge=<bridge>] [,firewall=<1|0>] [,link_down=<1|0>] [,macaddr=<XX:XX:XX:XX:XX:XX>] [,mtu=<integer>] [,queues=<integer>] [,rate=<number>] [,tag=<integer>] [,trunks=<vlanid[;vlanid...]>] [,<model>=<macaddr>]
--net[n] [model=]<枚举> [,bridge=<桥接>] [,firewall=<1|0>] [,link_down=<1|0>] [,macaddr=<XX:XX:XX:XX:XX:XX>] [,mtu=<整数>] [,queues=<整数>] [,rate=<数字>] [,tag=<整数>] [,trunks=<vlanid[;vlanid...]>] [,<model>=<macaddr>] -
Specify network devices.
指定网络设备。 -
--numa <boolean> (default = 0)
--numa <布尔值>(默认 = 0) -
Enable/disable NUMA. 启用/禁用 NUMA。
-
--numa[n] cpus=<id[-id];...> [,hostnodes=<id[-id];...>] [,memory=<number>] [,policy=<preferred|bind|interleave>]
--numa[n] cpus=<id[-id];...> [,hostnodes=<id[-id];...>] [,memory=<数字>] [,policy=<preferred|bind|interleave>] -
NUMA topology. NUMA 拓扑。
-
--onboot <boolean> (default = 0)
--onboot <布尔值>(默认 = 0) -
Specifies whether a VM will be started during system bootup.
指定虚拟机是否在系统启动时启动。 - --ostype <l24 | l26 | other | solaris | w2k | w2k3 | w2k8 | win10 | win11 | win7 | win8 | wvista | wxp>
-
Specify guest operating system.
指定客户操作系统。 - --parallel[n] /dev/parport\d+|/dev/usb/lp\d+
-
Map host parallel devices (n is 0 to 2).
映射主机并行设备(n 为 0 到 2)。 -
--protection <boolean> (default = 0)
--protection <boolean>(默认值 = 0) -
Sets the protection flag of the VM. This will disable the remove VM and remove disk operations.
设置虚拟机的保护标志。这将禁用删除虚拟机和删除磁盘操作。 -
--reboot <boolean> (default = 1)
--reboot <boolean>(默认值 = 1) -
Allow reboot. If set to 0, the VM exits on reboot.
允许重启。如果设置为 0,虚拟机在重启时退出。 - --revert <string>
-
Revert a pending change.
还原一个待处理的更改。 - --rng0 [source=]</dev/urandom|/dev/random|/dev/hwrng> [,max_bytes=<integer>] [,period=<integer>]
-
Configure a VirtIO-based Random Number Generator.
配置基于 VirtIO 的随机数生成器。 - --sata[n] [file=]<volume> [,aio=<native|threads|io_uring>] [,backup=<1|0>] [,bps=<bps>] [,bps_max_length=<seconds>] [,bps_rd=<bps>] [,bps_rd_max_length=<seconds>] [,bps_wr=<bps>] [,bps_wr_max_length=<seconds>] [,cache=<enum>] [,cyls=<integer>] [,detect_zeroes=<1|0>] [,discard=<ignore|on>] [,format=<enum>] [,heads=<integer>] [,import-from=<source volume>] [,iops=<iops>] [,iops_max=<iops>] [,iops_max_length=<seconds>] [,iops_rd=<iops>] [,iops_rd_max=<iops>] [,iops_rd_max_length=<seconds>] [,iops_wr=<iops>] [,iops_wr_max=<iops>] [,iops_wr_max_length=<seconds>] [,mbps=<mbps>] [,mbps_max=<mbps>] [,mbps_rd=<mbps>] [,mbps_rd_max=<mbps>] [,mbps_wr=<mbps>] [,mbps_wr_max=<mbps>] [,media=<cdrom|disk>] [,replicate=<1|0>] [,rerror=<ignore|report|stop>] [,secs=<integer>] [,serial=<serial>] [,shared=<1|0>] [,size=<DiskSize>] [,snapshot=<1|0>] [,ssd=<1|0>] [,trans=<none|lba|auto>] [,werror=<enum>] [,wwn=<wwn>]
-
Use volume as SATA hard disk or CD-ROM (n is 0 to 5). Use the special syntax STORAGE_ID:SIZE_IN_GiB to allocate a new volume. Use STORAGE_ID:0 and the import-from parameter to import from an existing volume.
将卷用作 SATA 硬盘或 CD-ROM(n 范围为 0 到 5)。使用特殊语法 STORAGE_ID:SIZE_IN_GiB 来分配新卷。使用 STORAGE_ID:0 和 import-from 参数从现有卷导入。 -
--scsi[n] [file=]<volume> [,aio=<native|threads|io_uring>] [,backup=<1|0>] [,bps=<bps>] [,bps_max_length=<seconds>] [,bps_rd=<bps>] [,bps_rd_max_length=<seconds>] [,bps_wr=<bps>] [,bps_wr_max_length=<seconds>] [,cache=<enum>] [,cyls=<integer>] [,detect_zeroes=<1|0>] [,discard=<ignore|on>] [,format=<enum>] [,heads=<integer>] [,import-from=<source volume>] [,iops=<iops>] [,iops_max=<iops>] [,iops_max_length=<seconds>] [,iops_rd=<iops>] [,iops_rd_max=<iops>] [,iops_rd_max_length=<seconds>] [,iops_wr=<iops>] [,iops_wr_max=<iops>] [,iops_wr_max_length=<seconds>] [,iothread=<1|0>] [,mbps=<mbps>] [,mbps_max=<mbps>] [,mbps_rd=<mbps>] [,mbps_rd_max=<mbps>] [,mbps_wr=<mbps>] [,mbps_wr_max=<mbps>] [,media=<cdrom|disk>] [,product=<product>] [,queues=<integer>] [,replicate=<1|0>] [,rerror=<ignore|report|stop>] [,ro=<1|0>] [,scsiblock=<1|0>] [,secs=<integer>] [,serial=<serial>] [,shared=<1|0>] [,size=<DiskSize>] [,snapshot=<1|0>] [,ssd=<1|0>] [,trans=<none|lba|auto>] [,vendor=<vendor>] [,werror=<enum>] [,wwn=<wwn>]
--scsi[n] [file=]<卷> [,aio=<native|threads|io_uring>] [,backup=<1|0>] [,bps=<bps>] [,bps_max_length=<秒>] [,bps_rd=<bps>] [,bps_rd_max_length=<秒>] [,bps_wr=<bps>] [,bps_wr_max_length=<秒>] [,cache=<枚举>] [,cyls=<整数>] [,detect_zeroes=<1|0>] [,discard=<ignore|on>] [,format=<枚举>] [,heads=<整数>] [,import-from=<源卷>] [,iops=<iops>] [,iops_max=<iops>] [,iops_max_length=<秒>] [,iops_rd=<iops>] [,iops_rd_max=<iops>] [,iops_rd_max_length=<秒>] [,iops_wr=<iops>] [,iops_wr_max=<iops>] [,iops_wr_max_length=<秒>] [,iothread=<1|0>] [,mbps=<mbps>] [,mbps_max=<mbps>] [,mbps_rd=<mbps>] [,mbps_rd_max=<mbps>] [,mbps_wr=<mbps>] [,mbps_wr_max=<mbps>] [,media=<cdrom|disk>] [,product=<产品>] [,queues=<整数>] [,replicate=<1|0>] [,rerror=<ignore|report|stop>] [,ro=<1|0>] [,scsiblock=<1|0>] [,secs=<整数>] [,serial=<序列号>] [,shared=<1|0>] [,size=<磁盘大小>] [,snapshot=<1|0>] [,ssd=<1|0>] [,trans=<none|lba|auto>] [,vendor=<厂商>] [,werror=<枚举>] [,wwn=<wwn>] -
Use volume as SCSI hard disk or CD-ROM (n is 0 to 30). Use the special syntax STORAGE_ID:SIZE_IN_GiB to allocate a new volume. Use STORAGE_ID:0 and the import-from parameter to import from an existing volume.
将卷用作 SCSI 硬盘或 CD-ROM(n 的取值范围为 0 到 30)。使用特殊语法 STORAGE_ID:SIZE_IN_GiB 来分配一个新卷。使用 STORAGE_ID:0 和 import-from 参数从现有卷导入。 -
--scsihw <lsi | lsi53c810 | megasas | pvscsi | virtio-scsi-pci | virtio-scsi-single> (default = lsi)
--scsihw <lsi | lsi53c810 | megasas | pvscsi | virtio-scsi-pci | virtio-scsi-single>(默认 = lsi) -
SCSI controller model SCSI 控制器型号
- --searchdomain <string> --searchdomain <字符串>
-
cloud-init: Sets DNS search domains for a container. Create will automatically use the setting from the host if neither searchdomain nor nameserver are set.
cloud-init:为容器设置 DNS 搜索域。如果既未设置 searchdomain 也未设置 nameserver,创建时将自动使用主机的设置。 - --serial[n] (/dev/.+|socket)
-
Create a serial device inside the VM (n is 0 to 3)
在虚拟机内创建一个串行设备(n 为 0 到 3) -
--shares <integer> (0 - 50000) (default = 1000)
--shares <整数>(0 - 50000)(默认值 = 1000) -
Amount of memory shares for auto-ballooning. The larger the number is, the more memory this VM gets. Number is relative to weights of all other running VMs. Using zero disables auto-ballooning. Auto-ballooning is done by pvestatd.
自动气球内存份额的数量。数字越大,该虚拟机获得的内存越多。该数字相对于所有其他正在运行的虚拟机的权重。使用零将禁用自动气球。自动气球由 pvestatd 执行。 - --skiplock <boolean>
-
Ignore locks - only root is allowed to use this option.
忽略锁定——只有 root 用户被允许使用此选项。 -
--smbios1 [base64=<1|0>] [,family=<Base64 encoded string>] [,manufacturer=<Base64 encoded string>] [,product=<Base64 encoded string>] [,serial=<Base64 encoded string>] [,sku=<Base64 encoded string>] [,uuid=<UUID>] [,version=<Base64 encoded string>]
--smbios1 [base64=<1|0>] [,family=<Base64 编码字符串>] [,manufacturer=<Base64 编码字符串>] [,product=<Base64 编码字符串>] [,serial=<Base64 编码字符串>] [,sku=<Base64 编码字符串>] [,uuid=<UUID>] [,version=<Base64 编码字符串>] -
Specify SMBIOS type 1 fields.
指定 SMBIOS 类型 1 字段。 -
--smp <integer> (1 - N) (default = 1)
--smp <整数> (1 - N)(默认 = 1) -
The number of CPUs. Please use option -sockets instead.
CPU 数量。请改用选项 -sockets。 -
--sockets <integer> (1 - N) (default = 1)
--sockets <整数> (1 - N)(默认 = 1) -
The number of CPU sockets.
CPU 插槽数量。 - --spice_enhancements [foldersharing=<1|0>] [,videostreaming=<off|all|filter>]
-
Configure additional enhancements for SPICE.
配置 SPICE 的额外增强功能。 - --sshkeys <filepath>
-
cloud-init: Setup public SSH keys (one key per line, OpenSSH format).
cloud-init:设置公共 SSH 密钥(每行一个密钥,OpenSSH 格式)。 -
--startdate (now | YYYY-MM-DD | YYYY-MM-DDTHH:MM:SS) (default = now)
--startdate(now | YYYY-MM-DD | YYYY-MM-DDTHH:MM:SS)(默认值 = now) -
Set the initial date of the real time clock. Valid formats for the date are: 'now', 2006-06-17T16:01:21, or 2006-06-17.
设置实时时钟的初始日期。有效的日期格式为:“now”或 2006-06-17T16:01:21 或 2006-06-17。 - --startup `[[order=]\d+] [,up=\d+] [,down=\d+] `
-
Startup and shutdown behavior. Order is a non-negative number defining the general startup order. Shutdown is done in reverse order. Additionally, you can set the up or down delay in seconds, which specifies a delay to wait before the next VM is started or stopped.
启动和关闭行为。顺序是一个非负数,定义了一般的启动顺序。关闭时按相反顺序进行。此外,您可以设置启动或关闭的延迟时间(以秒为单位),指定在启动或关闭下一个虚拟机之前等待的时间。 -
--tablet <boolean> (default = 1)
--tablet <boolean>(默认值 = 1) -
Enable/disable the USB tablet device.
启用/禁用 USB 平板设备。 - --tags <string>
-
Tags of the VM. This is only meta information.
虚拟机的标签。这只是元信息。 -
--tdf <boolean> (default = 0)
--tdf <布尔值>(默认 = 0) -
Enable/disable time drift fix.
启用/禁用时间漂移修正。 -
--template <boolean> (default = 0)
--template <布尔值>(默认 = 0) -
Enable/disable Template.
启用/禁用模板。 -
--tpmstate0 [file=]<volume> [,import-from=<source volume>] [,size=<DiskSize>] [,version=<v1.2|v2.0>]
--tpmstate0 [file=]<卷> [,import-from=<源卷>] [,size=<磁盘大小>] [,version=<v1.2|v2.0>] -
Configure a Disk for storing TPM state. The format is fixed to raw. Use the special syntax STORAGE_ID:SIZE_IN_GiB to allocate a new volume. Note that SIZE_IN_GiB is ignored here and 4 MiB will be used instead. Use STORAGE_ID:0 and the import-from parameter to import from an existing volume.
配置用于存储 TPM 状态的磁盘。格式固定为 raw。使用特殊语法 STORAGE_ID:SIZE_IN_GiB 来分配新卷。请注意,这里会忽略 SIZE_IN_GiB,改用 4 MiB。使用 STORAGE_ID:0 和 import-from 参数从现有卷导入。 -
--unused[n] [file=]<volume>
--unused[n] [file=]<卷> -
Reference to unused volumes. This is used internally, and should not be modified manually.
引用未使用的卷。此项用于内部,不应手动修改。 - --usb[n] [[host=]<HOSTUSBDEVICE|spice>] [,mapping=<mapping-id>] [,usb3=<1|0>]
-
Configure a USB device (n is 0 to 4; for machine version >= 7.1 and ostype l26 or windows > 7, n can be up to 14).
配置一个 USB 设备(n 为 0 到 4,对于机器版本>=7.1 且操作系统类型为 l26 或 Windows > 7,n 最多可达 14)。 -
--vcpus <integer> (1 - N) (default = 0)
--vcpus <整数>(1 - N)(默认 = 0) -
Number of hotplugged vcpus.
热插拔的虚拟 CPU 数量。 -
--vga [[type=]<enum>] [,clipboard=<vnc>] [,memory=<integer>]
--vga [[type=]<枚举>] [,clipboard=<vnc>] [,memory=<整数>] -
Configure the VGA hardware.
配置 VGA 硬件。 -
--virtio[n] [file=]<volume> [,aio=<native|threads|io_uring>] [,backup=<1|0>] [,bps=<bps>] [,bps_max_length=<seconds>] [,bps_rd=<bps>] [,bps_rd_max_length=<seconds>] [,bps_wr=<bps>] [,bps_wr_max_length=<seconds>] [,cache=<enum>] [,cyls=<integer>] [,detect_zeroes=<1|0>] [,discard=<ignore|on>] [,format=<enum>] [,heads=<integer>] [,import-from=<source volume>] [,iops=<iops>] [,iops_max=<iops>] [,iops_max_length=<seconds>] [,iops_rd=<iops>] [,iops_rd_max=<iops>] [,iops_rd_max_length=<seconds>] [,iops_wr=<iops>] [,iops_wr_max=<iops>] [,iops_wr_max_length=<seconds>] [,iothread=<1|0>] [,mbps=<mbps>] [,mbps_max=<mbps>] [,mbps_rd=<mbps>] [,mbps_rd_max=<mbps>] [,mbps_wr=<mbps>] [,mbps_wr_max=<mbps>] [,media=<cdrom|disk>] [,replicate=<1|0>] [,rerror=<ignore|report|stop>] [,ro=<1|0>] [,secs=<integer>] [,serial=<serial>] [,shared=<1|0>] [,size=<DiskSize>] [,snapshot=<1|0>] [,trans=<none|lba|auto>] [,werror=<enum>]
--virtio[n] [file=]<卷> [,aio=<native|threads|io_uring>] [,backup=<1|0>] [,bps=<bps>] [,bps_max_length=<秒>] [,bps_rd=<bps>] [,bps_rd_max_length=<秒>] [,bps_wr=<bps>] [,bps_wr_max_length=<秒>] [,cache=<枚举>] [,cyls=<整数>] [,detect_zeroes=<1|0>] [,discard=<ignore|on>] [,format=<枚举>] [,heads=<整数>] [,import-from=<源卷>] [,iops=<iops>] [,iops_max=<iops>] [,iops_max_length=<秒>] [,iops_rd=<iops>] [,iops_rd_max=<iops>] [,iops_rd_max_length=<秒>] [,iops_wr=<iops>] [,iops_wr_max=<iops>] [,iops_wr_max_length=<秒>] [,iothread=<1|0>] [,mbps=<mbps>] [,mbps_max=<mbps>] [,mbps_rd=<mbps>] [,mbps_rd_max=<mbps>] [,mbps_wr=<mbps>] [,mbps_wr_max=<mbps>] [,media=<cdrom|disk>] [,replicate=<1|0>] [,rerror=<ignore|report|stop>] [,ro=<1|0>] [,secs=<整数>] [,serial=<序列号>] [,shared=<1|0>] [,size=<磁盘大小>] [,snapshot=<1|0>] [,trans=<none|lba|auto>] [,werror=<枚举>] -
Use volume as VIRTIO hard disk (n is 0 to 15). Use the special syntax STORAGE_ID:SIZE_IN_GiB to allocate a new volume. Use STORAGE_ID:0 and the import-from parameter to import from an existing volume.
将卷用作 VIRTIO 硬盘(n 为 0 到 15)。使用特殊语法 STORAGE_ID:SIZE_IN_GiB 来分配新卷。使用 STORAGE_ID:0 和 import-from 参数从现有卷导入。 - --virtiofs[n] [dirid=]<mapping-id> [,cache=<enum>] [,direct-io=<1|0>] [,expose-acl=<1|0>] [,expose-xattr=<1|0>]
-
Configuration for sharing a directory between host and guest using Virtio-fs.
使用 Virtio-fs 在主机和客户机之间共享目录的配置。 -
--vmgenid <UUID> (default = 1 (autogenerated))
--vmgenid <UUID>(默认 = 1(自动生成)) -
Set VM Generation ID. Use 1 to autogenerate on create or update, pass 0 to disable explicitly.
设置虚拟机生成 ID。使用 1 表示在创建或更新时自动生成,传入 0 表示显式禁用。 -
--vmstatestorage <storage ID>
--vmstatestorage <存储 ID> -
Default storage for VM state volumes/files.
虚拟机状态卷/文件的默认存储。 -
--watchdog [[model=]<i6300esb|ib700>] [,action=<enum>]
--watchdog [[model=]<i6300esb|ib700>] [,action=<枚举>] -
Create a virtual hardware watchdog device.
创建一个虚拟硬件看门狗设备。
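A few hedged examples against a hypothetical VM 100: adjusting CPU and memory, enabling the guest agent, and attaching a new 32 GiB SCSI disk with the STORAGE_ID:SIZE_IN_GiB syntax described above (local-lvm is an assumed storage ID):
qm set 100 --cores 4 --memory 8192
qm set 100 --agent enabled=1
qm set 100 --scsi1 local-lvm:32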
qm showcmd <vmid> [OPTIONS]
qm showcmd <vmid> [选项]
Show command line which is used to start the VM (debug info).
显示用于启动虚拟机的命令行(调试信息)。
-
<vmid>: <integer> (100 - 999999999)
<vmid>:<整数>(100 - 999999999) -
The (unique) ID of the VM.
虚拟机的(唯一)ID。 -
--pretty <boolean> (default = 0)
--pretty <boolean>(默认 = 0) -
Puts each option on a new line to enhance human readability
将每个选项放在新的一行,以增强可读性 - --snapshot <string>
-
Fetch config values from given snapshot.
从给定的快照中获取配置值。
qm shutdown <vmid> [OPTIONS]
qm shutdown <vmid> [选项]
Shutdown virtual machine. This is similar to pressing the power button on a
physical machine. This will send an ACPI event for the guest OS, which
should then proceed to a clean shutdown.
关闭虚拟机。这类似于按下物理机器的电源按钮。此操作会向客户操作系统发送一个 ACPI 事件,客户操作系统应随后进行正常关机。
-
<vmid>: <integer> (100 - 999999999)
<vmid>:<整数>(100 - 999999999) -
The (unique) ID of the VM.
虚拟机的(唯一)ID。 -
--forceStop <boolean> (default = 0)
--forceStop <boolean>(默认值 = 0) -
Make sure the VM stops.
确保虚拟机停止。 -
--keepActive <boolean> (default = 0)
--keepActive <boolean>(默认值 = 0) -
Do not deactivate storage volumes.
不要停用存储卷。 - --skiplock <boolean>
-
Ignore locks - only root is allowed to use this option.
忽略锁定 - 仅允许 root 用户使用此选项。 - --timeout <integer> (0 - N)
-
Wait maximal timeout seconds.
等待最大超时时间(秒)。
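For example, to request a clean shutdown of VM 100 and hard-stop it if the guest has not powered off within two minutes:
qm shutdown 100 --timeout 120 --forceStop 1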
qm snapshot <vmid> <snapname> [OPTIONS]
qm snapshot <vmid> <snapname> [选项]
Snapshot a VM. 为虚拟机创建快照。
-
<vmid>: <integer> (100 - 999999999)
<vmid>:<整数>(100 - 999999999) -
The (unique) ID of the VM.
虚拟机的(唯一)ID。 - <snapname>: <string> <snapname>:<字符串>
-
The name of the snapshot.
快照的名称。 - --description <string> --description <字符串>
-
A textual description or comment.
文本描述或注释。 - --vmstate <boolean> --vmstate <布尔值>
-
Save the vmstate 保存虚拟机状态
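For example, to snapshot VM 100 including its RAM state before maintenance (the snapshot name pre-upgrade is only an example):
qm snapshot 100 pre-upgrade --description 'before dist-upgrade' --vmstate 1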
qm start <vmid> [OPTIONS]
qm start <vmid> [选项]
Start virtual machine. 启动虚拟机。
-
<vmid>: <integer> (100 - 999999999)
<vmid>: <整数> (100 - 999999999) -
The (unique) ID of the VM.
虚拟机的(唯一)ID。 - --force-cpu <string> --force-cpu <字符串>
-
Override QEMU’s -cpu argument with the given string.
用给定的字符串覆盖 QEMU 的 -cpu 参数。 -
--machine [[type=]<machine type>] [,enable-s3=<1|0>] [,enable-s4=<1|0>] [,viommu=<intel|virtio>]
--machine [[type=]<机器类型>] [,enable-s3=<1|0>] [,enable-s4=<1|0>] [,viommu=<intel|virtio>] -
Specify the QEMU machine.
指定 QEMU 机器。 - --migratedfrom <string> --migratedfrom <字符串>
-
The cluster node name.
集群节点名称。 -
--migration_network <string>
--migration_network <字符串> -
CIDR of the (sub) network that is used for migration.
用于迁移的(子)网络的 CIDR。 -
--migration_type <insecure | secure>
--migration_type <insecure | secure>
Migration traffic is encrypted using an SSH tunnel by default. On secure, completely private networks this can be disabled to increase performance.
迁移流量默认通过 SSH 隧道加密。在安全的完全私有网络中,可以禁用此功能以提高性能。 - --skiplock <boolean>
-
Ignore locks - only root is allowed to use this option.
忽略锁定 - 仅允许 root 用户使用此选项。 - --stateuri <string>
-
Some commands save/restore state from this location.
有些命令会从此位置保存/恢复状态。 -
--targetstorage <string>
--targetstorage <字符串> -
Mapping from source to target storages. Providing only a single storage ID maps all source storages to that storage. Providing the special value 1 will map each source storage to itself.
从源存储到目标存储的映射。仅提供单个存储 ID 时,会将所有源存储映射到该存储。提供特殊值 1 时,会将每个源存储映射到其自身。 -
--timeout <integer> (0 - N) (default = max(30, vm memory in GiB))
--timeout <整数> (0 - N) (默认 = max(30, 虚拟机内存大小,单位 GiB)) -
Wait maximal timeout seconds.
等待最大超时时间(秒)。
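For example, to start a hypothetical VM 100 and allow up to 120 seconds for it to come up (both values are illustrative):

qm start 100 --timeout 120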
qm status <vmid> [OPTIONS]
qm status <vmid> [选项]
Show VM status. 显示虚拟机状态。
-
<vmid>: <integer> (100 - 999999999)
<vmid>:<整数>(100 - 999999999) -
The (unique) ID of the VM.
虚拟机的(唯一)ID。 - --verbose <boolean>
-
Verbose output format 详细输出格式
qm stop <vmid> [OPTIONS] qm stop <vmid> [选项]
Stop virtual machine. The qemu process will exit immediately. This is akin
to pulling the power plug of a running computer and may damage the VM data.
停止虚拟机。qemu 进程将立即退出。这类似于拔掉正在运行的计算机的电源插头,可能会损坏虚拟机数据。
-
<vmid>: <integer> (100 - 999999999)
<vmid>: <整数>(100 - 999999999) -
The (unique) ID of the VM.
虚拟机的(唯一)ID。 -
--keepActive <boolean> (default = 0)
--keepActive <布尔值>(默认 = 0) -
Do not deactivate storage volumes.
不要停用存储卷。 - --migratedfrom <string>
-
The cluster node name.
集群节点名称。 -
--overrule-shutdown <boolean> (default = 0)
--overrule-shutdown <boolean>(默认值 = 0) -
Try to abort active qmshutdown tasks before stopping.
尝试在停止之前中止活动的 qmshutdown 任务。 - --skiplock <boolean>
-
Ignore locks - only root is allowed to use this option.
忽略锁定 - 仅允许 root 使用此选项。 - --timeout <integer> (0 - N)
-
Wait maximal timeout seconds.
等待最大超时时间(秒)。
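For example, to hard-stop a hypothetical VM 100 and abort any shutdown task still running for it (the VMID is illustrative):

qm stop 100 --overrule-shutdown 1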
qm suspend <vmid> [OPTIONS]
qm suspend <vmid> [选项]
Suspend virtual machine. 挂起虚拟机。
-
<vmid>: <integer> (100 - 999999999)
<vmid>:<整数>(100 - 999999999) -
The (unique) ID of the VM.
虚拟机的(唯一)ID。 - --skiplock <boolean> --skiplock <布尔值>
-
Ignore locks - only root is allowed to use this option.
忽略锁定——只有 root 用户被允许使用此选项。 -
--statestorage <storage ID>
--statestorage <存储 ID> -
The storage for the VM state
虚拟机状态的存储Requires option(s): todisk
需要选项:todisk -
--todisk <boolean> (default = 0)
--todisk <布尔值>(默认 = 0) -
If set, suspends the VM to disk. Will be resumed on next VM start.
如果设置,将虚拟机挂起到磁盘。下次启动虚拟机时将恢复。
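For example, to hibernate a hypothetical VM 100 to disk and place the state volume on a storage named local-lvm (both the VMID and the storage name are assumptions about your setup):

qm suspend 100 --todisk 1 --statestorage local-lvm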
qm template <vmid> [OPTIONS]
qm template <vmid> [选项]
Create a Template. 创建一个模板。
-
<vmid>: <integer> (100 - 999999999)
<vmid>:<整数>(100 - 999999999) -
The (unique) ID of the VM.
虚拟机的(唯一)ID。 - --disk <efidisk0 | ide0 | ide1 | ide2 | ide3 | sata0 | sata1 | sata2 | sata3 | sata4 | sata5 | scsi0 | scsi1 | scsi10 | scsi11 | scsi12 | scsi13 | scsi14 | scsi15 | scsi16 | scsi17 | scsi18 | scsi19 | scsi2 | scsi20 | scsi21 | scsi22 | scsi23 | scsi24 | scsi25 | scsi26 | scsi27 | scsi28 | scsi29 | scsi3 | scsi30 | scsi4 | scsi5 | scsi6 | scsi7 | scsi8 | scsi9 | tpmstate0 | virtio0 | virtio1 | virtio10 | virtio11 | virtio12 | virtio13 | virtio14 | virtio15 | virtio2 | virtio3 | virtio4 | virtio5 | virtio6 | virtio7 | virtio8 | virtio9>
-
If you want to convert only one disk to a base image.
如果您只想将一个磁盘转换为基础镜像。
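For example, to convert only the scsi0 disk of a hypothetical VM 100 into a base image (the VMID is illustrative):

qm template 100 --disk scsi0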
qm terminal <vmid> [OPTIONS]
qm terminal <vmid> [选项]
Open a terminal using a serial device (the VM needs to have a serial device
configured, for example serial0: socket)
使用串行设备打开终端(虚拟机需要配置串行设备,例如 serial0: socket)
-
<vmid>: <integer> (100 - 999999999)
<vmid>: <整数> (100 - 999999999) -
The (unique) ID of the VM.
虚拟机的(唯一)ID。 -
--escape <string> (default = ^O)
--escape <字符串> (默认 = ^O) -
Escape character. 转义字符。
- --iface <serial0 | serial1 | serial2 | serial3>
-
Select the serial device. By default we simply use the first suitable device.
选择串口设备。默认情况下,我们仅使用第一个合适的设备。
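For example, to attach to the first serial device of a hypothetical VM 100 (the VMID is illustrative; serial0 must already be configured in the VM, and ^O exits the session by default):

qm terminal 100 --iface serial0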
qm unlink
An alias for qm disk unlink.
qm disk unlink 的别名。
qm unlock <vmid>
Unlock the VM. 解锁虚拟机。
-
<vmid>: <integer> (100 - 999999999)
<vmid>:<整数>(100 - 999999999) -
The (unique) ID of the VM.
虚拟机的(唯一)ID。
qm vncproxy <vmid>
Proxy VM VNC traffic to stdin/stdout
将虚拟机的 VNC 流量代理到标准输入/输出
-
<vmid>: <integer> (100 - 999999999)
<vmid>:<整数>(100 - 999999999) -
The (unique) ID of the VM.
虚拟机的(唯一)ID。
qm wait <vmid> [OPTIONS] qm wait <vmid> [选项]
Wait until the VM is stopped.
等待直到虚拟机停止。
-
<vmid>: <integer> (100 - 999999999)
<vmid>:<整数>(100 - 999999999) -
The (unique) ID of the VM.
虚拟机的(唯一)ID。 -
--timeout <integer> (1 - N)
--timeout <整数> (1 - N) -
Timeout in seconds. Default is to wait forever.
超时时间,单位为秒。默认是无限等待。
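For example, to wait at most five minutes for a hypothetical VM 100 to stop (both values are illustrative):

qm wait 100 --timeout 300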
22.10. qmrestore - Restore QemuServer vzdump Backups
22.10. qmrestore - 恢复 QemuServer vzdump 备份
qmrestore help
qmrestore <archive> <vmid> [OPTIONS]
qmrestore <archive> <vmid> [选项]
Restore QemuServer vzdump backups.
恢复 QemuServer vzdump 备份。
- <archive>: <string> <archive>: <字符串>
-
The backup file. You can pass - to read from standard input.
备份文件。你可以传入 - 从标准输入读取。 -
<vmid>: <integer> (100 - 999999999)
<vmid>: <整数> (100 - 999999999) -
The (unique) ID of the VM.
虚拟机的(唯一)ID。 -
--bwlimit <number> (0 - N)
--bwlimit <数字> (0 - N) -
Override I/O bandwidth limit (in KiB/s).
覆盖 I/O 带宽限制(以 KiB/s 为单位)。 - --force <boolean>
-
Allow to overwrite existing VM.
允许覆盖现有虚拟机。 - --live-restore <boolean>
-
Start the VM immediately from the backup and restore in background. PBS only.
立即从备份启动虚拟机,并在后台恢复。仅限 PBS。 - --pool <string>
-
Add the VM to the specified pool.
将虚拟机添加到指定的资源池。 - --storage <storage ID>
-
Default storage. 默认存储。
- --unique <boolean>
-
Assign a unique random ethernet address.
分配一个唯一的随机以太网地址。
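A sketch of a restore, assuming a vzdump archive already exists under /var/lib/vz/dump and a storage named local-lvm is available (the archive file name, the new VMID 101 and the storage name are all illustrative placeholders for your own values):

qmrestore /var/lib/vz/dump/vzdump-qemu-100-2024_01_01-00_00_00.vma.zst 101 --storage local-lvm --unique 1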
22.11. pct - Proxmox Container Toolkit
22.11. pct - Proxmox 容器工具包
pct <COMMAND> [ARGS] [OPTIONS]
pct <命令> [参数] [选项]
pct clone <vmid> <newid> [OPTIONS]
pct clone <vmid> <newid> [选项]
Create a container clone/copy
创建容器克隆/复制
-
<vmid>: <integer> (100 - 999999999)
<vmid>:<整数>(100 - 999999999) -
The (unique) ID of the VM.
虚拟机的(唯一)ID。 -
<newid>: <integer> (100 - 999999999)
<newid>: <整数> (100 - 999999999) -
VMID for the clone.
克隆的 VMID。 -
--bwlimit <number> (0 - N) (default = clone limit from datacenter or storage config)
--bwlimit <数字> (0 - N) (默认 = 来自数据中心或存储配置的克隆限制) -
Override I/O bandwidth limit (in KiB/s).
覆盖 I/O 带宽限制(以 KiB/s 为单位)。 - --description <string>
-
Description for the new CT.
新 CT 的描述。 - --full <boolean>
-
Create a full copy of all disks. This is always done when you clone a normal CT. For CT templates, we try to create a linked clone by default.
创建所有磁盘的完整副本。克隆普通 CT 时总是执行此操作。对于 CT 模板,我们默认尝试创建一个链接克隆。 - --hostname <string>
-
Set a hostname for the new CT.
为新的 CT 设置主机名。 - --pool <string>
-
Add the new CT to the specified pool.
将新的 CT 添加到指定的资源池中。 - --snapname <string>
-
The name of the snapshot.
快照的名称。 - --storage <storage ID>
-
Target storage for full clone.
完整克隆的目标存储。 - --target <string>
-
Target node. Only allowed if the original VM is on shared storage.
目标节点。仅当原始虚拟机位于共享存储上时允许使用。
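For example, to create a full clone of a hypothetical container 100 as the new container 101 on a storage named local-lvm (the IDs, hostname and storage name are illustrative):

pct clone 100 101 --hostname ct-clone --full 1 --storage local-lvm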
pct config <vmid> [OPTIONS]
Get container configuration.
获取容器配置。
-
<vmid>: <integer> (100 - 999999999)
<vmid>:<整数>(100 - 999999999) -
The (unique) ID of the VM.
虚拟机的(唯一)ID。 -
--current <boolean> (default = 0)
--current <布尔值>(默认 = 0) -
Get current values (instead of pending values).
获取当前值(而非待定值)。 - --snapshot <string>
-
Fetch config values from given snapshot.
从指定的快照中获取配置值。
pct console <vmid> [OPTIONS]
pct console <vmid> [选项]
Launch a console for the specified container.
为指定的容器启动控制台。
-
<vmid>: <integer> (100 - 999999999)
<vmid>: <整数>(100 - 999999999) -
The (unique) ID of the VM.
虚拟机的(唯一)ID。 -
--escape \^?[a-z] (default = ^a)
--escape \^?[a-z](默认 = ^a) -
Escape sequence prefix. For example, to use <Ctrl+b q> as the escape sequence, pass ^b.
转义序列前缀。例如,要使用 <Ctrl+b q> 作为转义序列,则传递 ^b。
pct cpusets
Print the list of assigned CPU sets.
打印已分配的 CPU 集合列表。
pct create <vmid> <ostemplate> [OPTIONS]
Create or restore a container.
创建或恢复一个容器。
-
<vmid>: <integer> (100 - 999999999)
<vmid>: <整数> (100 - 999999999) -
The (unique) ID of the VM.
虚拟机的(唯一)ID。 - <ostemplate>: <string> <ostemplate>: <字符串>
-
The OS template or backup file.
操作系统模板或备份文件。 -
--arch <amd64 | arm64 | armhf | i386 | riscv32 | riscv64> (default = amd64)
--arch <amd64 | arm64 | armhf | i386 | riscv32 | riscv64>(默认 = amd64) -
OS architecture type. 操作系统架构类型。
-
--bwlimit <number> (0 - N) (default = restore limit from datacenter or storage config)
--bwlimit <数字>(0 - N)(默认 = 从数据中心或存储配置恢复限制) -
Override I/O bandwidth limit (in KiB/s).
覆盖 I/O 带宽限制(以 KiB/s 为单位)。 -
--cmode <console | shell | tty> (default = tty)
--cmode <console | shell | tty>(默认 = tty) -
Console mode. By default, the console command tries to open a connection to one of the available tty devices. By setting cmode to console it tries to attach to /dev/console instead. If you set cmode to shell, it simply invokes a shell inside the container (no login).
控制台模式。默认情况下,console 命令尝试连接到可用的 tty 设备之一。通过将 cmode 设置为 console,它会尝试连接到 /dev/console。若将 cmode 设置为 shell,则会在容器内直接调用一个 shell(无登录)。 -
--console <boolean> (default = 1)
--console <boolean>(默认 = 1) -
Attach a console device (/dev/console) to the container.
将控制台设备(/dev/console)连接到容器。 -
--cores <integer> (1 - 8192)
--cores <整数> (1 - 8192) -
The number of cores assigned to the container. A container can use all available cores by default.
分配给容器的核心数。容器默认可以使用所有可用核心。 -
--cpulimit <number> (0 - 8192) (default = 0)
--cpulimit <数字> (0 - 8192) (默认 = 0) -
Limit of CPU usage.
CPU 使用限制。If the computer has 2 CPUs, it has a total of 2 CPU time available. Value 0 indicates no CPU limit.
如果计算机有 2 个 CPU,则总共有 2 个 CPU 时间。值 0 表示没有 CPU 限制。 -
--cpuunits <integer> (0 - 500000) (default = cgroup v1: 1024, cgroup v2: 100)
--cpuunits <整数> (0 - 500000)(默认值 = cgroup v1: 1024,cgroup v2: 100) -
CPU weight for a container, will be clamped to [1, 10000] in cgroup v2.
容器的 CPU 权重,在 cgroup v2 中将限制在[1, 10000]范围内。 -
--debug <boolean> (default = 0)
--debug <布尔值>(默认值 = 0) -
Try to be more verbose. For now this only enables debug log-level on start.
尝试更详细一些。目前这仅在启动时启用调试日志级别。 - --description <string>
-
Description for the Container. Shown in the web-interface CT’s summary. This is saved as comment inside the configuration file.
容器的描述。在网页界面 CT 摘要中显示。此内容作为注释保存在配置文件中。 -
--dev[n] [[path=]<Path>] [,deny-write=<1|0>] [,gid=<integer>] [,mode=<Octal access mode>] [,uid=<integer>]
--dev[n] [[path=]<路径>] [,deny-write=<1|0>] [,gid=<整数>] [,mode=<八进制访问模式>] [,uid=<整数>] -
Device to pass through to the container
要传递给容器的设备 - --features [force_rw_sys=<1|0>] [,fuse=<1|0>] [,keyctl=<1|0>] [,mknod=<1|0>] [,mount=<fstype;fstype;...>] [,nesting=<1|0>]
-
Allow containers access to advanced features.
允许容器访问高级功能。 - --force <boolean>
-
Allow to overwrite existing container.
允许覆盖现有容器。 - --hookscript <string>
-
Script that will be executed during various steps in the containers lifetime.
将在容器生命周期的各个阶段执行的脚本。 - --hostname <string>
-
Set a host name for the container.
为容器设置主机名。 - --ignore-unpack-errors <boolean>
-
Ignore errors when extracting the template.
在解压模板时忽略错误。 - --lock <backup | create | destroyed | disk | fstrim | migrate | mounted | rollback | snapshot | snapshot-delete>
-
Lock/unlock the container.
锁定/解锁容器。 -
--memory <integer> (16 - N) (default = 512)
--memory <整数> (16 - N) (默认 = 512) -
Amount of RAM for the container in MB.
容器的内存大小,单位为 MB。 -
--mp[n] [volume=]<volume> ,mp=<Path> [,acl=<1|0>] [,backup=<1|0>] [,mountoptions=<opt[;opt...]>] [,quota=<1|0>] [,replicate=<1|0>] [,ro=<1|0>] [,shared=<1|0>] [,size=<DiskSize>]
--mp[n] [volume=]<卷> ,mp=<路径> [,acl=<1|0>] [,backup=<1|0>] [,mountoptions=<选项[;选项...]>] [,quota=<1|0>] [,replicate=<1|0>] [,ro=<1|0>] [,shared=<1|0>] [,size=<磁盘大小>] -
Use volume as container mount point. Use the special syntax STORAGE_ID:SIZE_IN_GiB to allocate a new volume.
使用卷作为容器挂载点。使用特殊语法 STORAGE_ID:SIZE_IN_GiB 来分配一个新卷。 - --nameserver <string>
-
Sets DNS server IP address for a container. Create will automatically use the setting from the host if you neither set searchdomain nor nameserver.
为容器设置 DNS 服务器 IP 地址。如果既未设置 searchdomain 也未设置 nameserver,创建时将自动使用主机的设置。 - --net[n] name=<string> [,bridge=<bridge>] [,firewall=<1|0>] [,gw=<GatewayIPv4>] [,gw6=<GatewayIPv6>] [,hwaddr=<XX:XX:XX:XX:XX:XX>] [,ip=<(IPv4/CIDR|dhcp|manual)>] [,ip6=<(IPv6/CIDR|auto|dhcp|manual)>] [,link_down=<1|0>] [,mtu=<integer>] [,rate=<mbps>] [,tag=<integer>] [,trunks=<vlanid[;vlanid...]>] [,type=<veth>]
-
Specifies network interfaces for the container.
指定容器的网络接口。 -
--onboot <boolean> (default = 0)
--onboot <boolean>(默认值 = 0) -
Specifies whether a container will be started during system bootup.
指定容器是否在系统启动时启动。 - --ostype <alpine | archlinux | centos | debian | devuan | fedora | gentoo | nixos | opensuse | ubuntu | unmanaged>
-
OS type. This is used to setup configuration inside the container, and corresponds to lxc setup scripts in /usr/share/lxc/config/<ostype>.common.conf. Value unmanaged can be used to skip any OS specific setup.
操作系统类型。此项用于在容器内设置配置,对应于 /usr/share/lxc/config/<ostype>.common.conf 中的 lxc 设置脚本。值为 unmanaged 可用于跳过操作系统特定的设置。 - --password <password>
-
Sets root password inside container.
设置容器内的 root 密码。 - --pool <string>
-
Add the VM to the specified pool.
将虚拟机添加到指定的资源池。 -
--protection <boolean> (default = 0)
--protection <boolean>(默认值 = 0) -
Sets the protection flag of the container. This will prevent the CT or CT’s disk remove/update operation.
设置容器的保护标志。这将防止容器或容器的磁盘被删除/更新操作。 - --restore <boolean>
-
Mark this as restore task.
将此标记为恢复任务。 -
--rootfs [volume=]<volume> [,acl=<1|0>] [,mountoptions=<opt[;opt...]>] [,quota=<1|0>] [,replicate=<1|0>] [,ro=<1|0>] [,shared=<1|0>] [,size=<DiskSize>]
--rootfs [volume=]<卷> [,acl=<1|0>] [,mountoptions=<选项[;选项...]>] [,quota=<1|0>] [,replicate=<1|0>] [,ro=<1|0>] [,shared=<1|0>] [,size=<磁盘大小>] -
Use volume as container root.
使用卷作为容器根目录。 - --searchdomain <string> --searchdomain <字符串>
-
Sets DNS search domains for a container. Create will automatically use the setting from the host if you neither set searchdomain nor nameserver.
为容器设置 DNS 搜索域。如果既未设置 searchdomain 也未设置 nameserver,创建时将自动使用主机的设置。 -
--ssh-public-keys <filepath>
--ssh-public-keys <文件路径> -
Setup public SSH keys (one key per line, OpenSSH format).
设置公用 SSH 密钥(每行一个密钥,OpenSSH 格式)。 -
--start <boolean> (default = 0)
--start <布尔值>(默认 = 0) -
Start the CT after its creation finished successfully.
在容器创建成功完成后启动该容器。 - --startup `[[order=]\d+] [,up=\d+] [,down=\d+] `
-
Startup and shutdown behavior. Order is a non-negative number defining the general startup order. Shutdown is done with reverse ordering. Additionally you can set the up or down delay in seconds, which specifies a delay to wait before the next VM is started or stopped.
启动和关闭行为。order 是一个非负数,定义了一般的启动顺序。关闭时按相反顺序进行。此外,您可以设置 up 或 down 延迟(以秒为单位),指定在启动或关闭下一个虚拟机之前等待的延迟时间。 -
--storage <storage ID> (default = local)
--storage <存储 ID>(默认 = local) -
Default Storage. 默认存储。
-
--swap <integer> (0 - N) (default = 512)
--swap <整数> (0 - N) (默认 = 512) -
Amount of SWAP for the container in MB.
容器的交换空间大小,单位为 MB。 - --tags <string> --tags <字符串>
-
Tags of the Container. This is only meta information.
容器的标签。这只是元信息。 -
--template <boolean> (default = 0)
--template <boolean>(默认 = 0) -
Enable/disable Template.
启用/禁用模板。 - --timezone <string>
-
Time zone to use in the container. If option isn’t set, then nothing will be done. Can be set to host to match the host time zone, or an arbitrary time zone option from /usr/share/zoneinfo/zone.tab
容器中使用的时区。如果未设置此选项,则不会进行任何操作。可以设置为 host 以匹配主机时区,或者设置为/usr/share/zoneinfo/zone.tab 中的任意时区选项 -
--tty <integer> (0 - 6) (default = 2)
--tty <整数> (0 - 6) (默认 = 2) -
Specify the number of tty available to the container
指定容器可用的 tty 数量 - --unique <boolean> --unique <布尔值>
-
Assign a unique random ethernet address.
分配一个唯一的随机以太网地址。Requires option(s): restore
需要选项:restore -
--unprivileged <boolean> (default = 0)
--unprivileged <布尔值>(默认 = 0) -
Makes the container run as unprivileged user. (Should not be modified manually.)
使容器以非特权用户身份运行。(不应手动修改。) - --unused[n] [volume=]<volume>
-
Reference to unused volumes. This is used internally, and should not be modified manually.
引用未使用的卷。此项用于内部,不应手动修改。
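A minimal sketch of creating an unprivileged container, assuming a Debian template has already been downloaded to the local storage and a bridge vmbr0 exists (the VMID, template file name, password and resource sizes are all illustrative and will differ on your system):

pct create 200 local:vztmpl/debian-12-standard_12.7-1_amd64.tar.zst --hostname ct200 --memory 1024 --cores 2 --rootfs local-lvm:8 --net0 name=eth0,bridge=vmbr0,ip=dhcp --password mysecret --unprivileged 1 --start 1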
pct delsnapshot <vmid> <snapname> [OPTIONS]
Delete a LXC snapshot. 删除 LXC 快照。
-
<vmid>: <integer> (100 - 999999999)
<vmid>: <整数> (100 - 999999999) -
The (unique) ID of the VM.
虚拟机的(唯一)ID。 - <snapname>: <string> <snapname>: <字符串>
-
The name of the snapshot.
快照的名称。 - --force <boolean>
-
For removal from config file, even if removing disk snapshots fails.
即使删除磁盘快照失败,也从配置文件中移除。
pct destroy <vmid> [OPTIONS]
Destroy the container (also delete all used files).
销毁容器(同时删除所有使用的文件)。
-
<vmid>: <integer> (100 - 999999999)
<vmid>: <整数> (100 - 999999999) -
The (unique) ID of the VM.
虚拟机的(唯一)ID。 -
--destroy-unreferenced-disks <boolean>
--destroy-unreferenced-disks <布尔值> -
If set, destroy additionally all disks with the VMID from all enabled storages which are not referenced in the config.
如果设置,额外销毁所有启用存储中与 VMID 相关但未在配置中引用的磁盘。 -
--force <boolean> (default = 0)
--force <boolean>(默认值 = 0) -
Force destroy, even if running.
强制销毁,即使正在运行。 -
--purge <boolean> (default = 0)
--purge <boolean>(默认值 = 0) -
Remove container from all related configurations. For example, backup jobs, replication jobs or HA. Related ACLs and Firewall entries will always be removed.
从所有相关配置中移除容器。例如,备份任务、复制任务或高可用性。相关的访问控制列表(ACL)和防火墙条目将始终被移除。
pct df <vmid>
Get the container’s current disk usage.
获取容器当前的磁盘使用情况。
-
<vmid>: <integer> (100 - 999999999)
<vmid>:<整数>(100 - 999999999) -
The (unique) ID of the VM.
虚拟机的(唯一)ID。
pct enter <vmid> [OPTIONS]
pct enter <vmid> [选项]
Launch a shell for the specified container.
为指定的容器启动一个 Shell。
-
<vmid>: <integer> (100 - 999999999)
<vmid>:<整数>(100 - 999999999) -
The (unique) ID of the VM.
虚拟机的(唯一)ID。 -
--keep-env <boolean> (default = 1)
--keep-env <boolean>(默认值 = 1) -
Keep the current environment. This option will be disabled by default with PVE 9. If you rely on a preserved environment, please use this option to be future-proof.
保持当前环境。此选项在 PVE 9 中默认禁用。如果您依赖于保留的环境,请使用此选项以确保未来兼容性。
pct exec <vmid> [<extra-args>] [OPTIONS]
pct exec <vmid> [<extra-args>] [选项]
Launch a command inside the specified container.
在指定的容器内启动命令。
-
<vmid>: <integer> (100 - 999999999)
<vmid>: <整数> (100 - 999999999) -
The (unique) ID of the VM.
虚拟机的(唯一)ID。 - <extra-args>: <array> <extra-args>: <数组>
-
Extra arguments as array
额外参数,作为数组形式 -
--keep-env <boolean> (default = 1)
--keep-env <boolean>(默认值 = 1) -
Keep the current environment. This option will be disabled by default with PVE 9. If you rely on a preserved environment, please use this option to be future-proof.
保持当前环境。此选项在 PVE 9 中默认禁用。如果您依赖于保留的环境,请使用此选项以确保未来兼容性。
pct fsck <vmid> [OPTIONS]
pct fsck <vmid> [选项]
Run a filesystem check (fsck) on a container volume.
对容器卷运行文件系统检查(fsck)。
-
<vmid>: <integer> (100 - 999999999)
<vmid>:<整数>(100 - 999999999) -
The (unique) ID of the VM.
虚拟机的(唯一)ID。 - --device <mp0 | mp1 | mp10 | mp100 | mp101 | mp102 | mp103 | mp104 | mp105 | mp106 | mp107 | mp108 | mp109 | mp11 | mp110 | mp111 | mp112 | mp113 | mp114 | mp115 | mp116 | mp117 | mp118 | mp119 | mp12 | mp120 | mp121 | mp122 | mp123 | mp124 | mp125 | mp126 | mp127 | mp128 | mp129 | mp13 | mp130 | mp131 | mp132 | mp133 | mp134 | mp135 | mp136 | mp137 | mp138 | mp139 | mp14 | mp140 | mp141 | mp142 | mp143 | mp144 | mp145 | mp146 | mp147 | mp148 | mp149 | mp15 | mp150 | mp151 | mp152 | mp153 | mp154 | mp155 | mp156 | mp157 | mp158 | mp159 | mp16 | mp160 | mp161 | mp162 | mp163 | mp164 | mp165 | mp166 | mp167 | mp168 | mp169 | mp17 | mp170 | mp171 | mp172 | mp173 | mp174 | mp175 | mp176 | mp177 | mp178 | mp179 | mp18 | mp180 | mp181 | mp182 | mp183 | mp184 | mp185 | mp186 | mp187 | mp188 | mp189 | mp19 | mp190 | mp191 | mp192 | mp193 | mp194 | mp195 | mp196 | mp197 | mp198 | mp199 | mp2 | mp20 | mp200 | mp201 | mp202 | mp203 | mp204 | mp205 | mp206 | mp207 | mp208 | mp209 | mp21 | mp210 | mp211 | mp212 | mp213 | mp214 | mp215 | mp216 | mp217 | mp218 | mp219 | mp22 | mp220 | mp221 | mp222 | mp223 | mp224 | mp225 | mp226 | mp227 | mp228 | mp229 | mp23 | mp230 | mp231 | mp232 | mp233 | mp234 | mp235 | mp236 | mp237 | mp238 | mp239 | mp24 | mp240 | mp241 | mp242 | mp243 | mp244 | mp245 | mp246 | mp247 | mp248 | mp249 | mp25 | mp250 | mp251 | mp252 | mp253 | mp254 | mp255 | mp26 | mp27 | mp28 | mp29 | mp3 | mp30 | mp31 | mp32 | mp33 | mp34 | mp35 | mp36 | mp37 | mp38 | mp39 | mp4 | mp40 | mp41 | mp42 | mp43 | mp44 | mp45 | mp46 | mp47 | mp48 | mp49 | mp5 | mp50 | mp51 | mp52 | mp53 | mp54 | mp55 | mp56 | mp57 | mp58 | mp59 | mp6 | mp60 | mp61 | mp62 | mp63 | mp64 | mp65 | mp66 | mp67 | mp68 | mp69 | mp7 | mp70 | mp71 | mp72 | mp73 | mp74 | mp75 | mp76 | mp77 | mp78 | mp79 | mp8 | mp80 | mp81 | mp82 | mp83 | mp84 | mp85 | mp86 | mp87 | mp88 | mp89 | mp9 | mp90 | mp91 | mp92 | mp93 | mp94 | mp95 | mp96 | mp97 | mp98 | mp99 | rootfs>
-
A volume on which to run the filesystem check
用于运行文件系统检查的卷 -
--force <boolean> (default = 0)
--force <boolean>(默认 = 0) -
Force checking, even if the filesystem seems clean
强制检查,即使文件系统看起来是干净的
pct fstrim <vmid> [OPTIONS]
pct fstrim <vmid> [选项]
Run fstrim on a chosen CT and its mountpoints, except bind or read-only
mountpoints.
在选定的 CT 及其挂载点上运行 fstrim,绑定挂载点或只读挂载点除外。
-
<vmid>: <integer> (100 - 999999999)
<vmid>:<整数>(100 - 999999999) -
The (unique) ID of the VM.
虚拟机的(唯一)ID。 - --ignore-mountpoints <boolean>
-
Skip all mountpoints, only do fstrim on the container root.
跳过所有挂载点,仅对容器根目录执行 fstrim。
pct help [OPTIONS]
Get help about specified command.
获取指定命令的帮助信息。
- --extra-args <array> --extra-args <数组>
-
Shows help for a specific command
显示特定命令的帮助信息 - --verbose <boolean> --verbose <布尔值>
-
Verbose output format. 详细输出格式。
pct list
LXC container index (per node).
LXC 容器索引(每个节点)。
pct listsnapshot <vmid>
List all snapshots. 列出所有快照。
-
<vmid>: <integer> (100 - 999999999)
<vmid>:<整数>(100 - 999999999) -
The (unique) ID of the VM.
虚拟机的(唯一)ID。
pct migrate <vmid> <target> [OPTIONS]
pct migrate <vmid> <target> [选项]
Migrate the container to another node. Creates a new migration task.
将容器迁移到另一个节点。创建一个新的迁移任务。
-
<vmid>: <integer> (100 - 999999999)
<vmid>:<整数>(100 - 999999999) -
The (unique) ID of the VM.
虚拟机的(唯一)ID。 - <target>: <string> <target>:<字符串>
-
Target node. 目标节点。
-
--bwlimit <number> (0 - N) (default = migrate limit from datacenter or storage config)
--bwlimit <数字> (0 - N)(默认 = 来自数据中心或存储配置的迁移限制) -
Override I/O bandwidth limit (in KiB/s).
覆盖 I/O 带宽限制(以 KiB/s 为单位)。 - --online <boolean> --online <布尔值>
-
Use online/live migration.
使用在线/实时迁移。 - --restart <boolean>
-
Use restart migration 使用重启迁移
- --target-storage <string>
-
Mapping from source to target storages. Providing only a single storage ID maps all source storages to that storage. Providing the special value 1 will map each source storage to itself.
从源存储到目标存储的映射。仅提供单个存储 ID 时,会将所有源存储映射到该存储。提供特殊值 1 时,会将每个源存储映射到其自身。 -
--timeout <integer> (default = 180)
--timeout <整数>(默认值 = 180) -
Timeout in seconds for shutdown for restart migration
重启迁移时关闭的超时时间,单位为秒
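For example, to move a running hypothetical container 100 to a node named node2 using restart migration, allowing up to 300 seconds for the shutdown (the VMID, node name and timeout are illustrative):

pct migrate 100 node2 --restart 1 --timeout 300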
pct mount <vmid>
Mount the container’s filesystem on the host. This will hold a lock on the
container and is meant for emergency maintenance only as it will prevent
further operations on the container other than start and stop.
将容器的文件系统挂载到主机上。这将对容器加锁,仅用于紧急维护,因为它会阻止对容器进行除启动和停止以外的其他操作。
-
<vmid>: <integer> (100 - 999999999)
<vmid>: <整数> (100 - 999999999) -
The (unique) ID of the VM.
虚拟机的(唯一)ID。
pct move-volume <vmid> <volume> [<storage>] [<target-vmid>] [<target-volume>] [OPTIONS]
pct move-volume <vmid> <volume> [<storage>] [<target-vmid>] [<target-volume>] [选项]
Move a rootfs-/mp-volume to a different storage or to a different
container.
将 rootfs-/mp-卷移动到不同的存储或不同的容器。
-
<vmid>: <integer> (100 - 999999999)
<vmid>:<整数>(100 - 999999999) -
The (unique) ID of the VM.
虚拟机的(唯一)ID。 -
<volume>: <mp0 | mp1 | mp10 | mp100 | mp101 | mp102 | mp103 | mp104 | mp105 | mp106 | mp107 | mp108 | mp109 | mp11 | mp110 | mp111 | mp112 | mp113 | mp114 | mp115 | mp116 | mp117 | mp118 | mp119 | mp12 | mp120 | mp121 | mp122 | mp123 | mp124 | mp125 | mp126 | mp127 | mp128 | mp129 | mp13 | mp130 | mp131 | mp132 | mp133 | mp134 | mp135 | mp136 | mp137 | mp138 | mp139 | mp14 | mp140 | mp141 | mp142 | mp143 | mp144 | mp145 | mp146 | mp147 | mp148 | mp149 | mp15 | mp150 | mp151 | mp152 | mp153 | mp154 | mp155 | mp156 | mp157 | mp158 | mp159 | mp16 | mp160 | mp161 | mp162 | mp163 | mp164 | mp165 | mp166 | mp167 | mp168 | mp169 | mp17 | mp170 | mp171 | mp172 | mp173 | mp174 | mp175 | mp176 | mp177 | mp178 | mp179 | mp18 | mp180 | mp181 | mp182 | mp183 | mp184 | mp185 | mp186 | mp187 | mp188 | mp189 | mp19 | mp190 | mp191 | mp192 | mp193 | mp194 | mp195 | mp196 | mp197 | mp198 | mp199 | mp2 | mp20 | mp200 | mp201 | mp202 | mp203 | mp204 | mp205 | mp206 | mp207 | mp208 | mp209 | mp21 | mp210 | mp211 | mp212 | mp213 | mp214 | mp215 | mp216 | mp217 | mp218 | mp219 | mp22 | mp220 | mp221 | mp222 | mp223 | mp224 | mp225 | mp226 | mp227 | mp228 | mp229 | mp23 | mp230 | mp231 | mp232 | mp233 | mp234 | mp235 | mp236 | mp237 | mp238 | mp239 | mp24 | mp240 | mp241 | mp242 | mp243 | mp244 | mp245 | mp246 | mp247 | mp248 | mp249 | mp25 | mp250 | mp251 | mp252 | mp253 | mp254 | mp255 | mp26 | mp27 | mp28 | mp29 | mp3 | mp30 | mp31 | mp32 | mp33 | mp34 | mp35 | mp36 | mp37 | mp38 | mp39 | mp4 | mp40 | mp41 | mp42 | mp43 | mp44 | mp45 | mp46 | mp47 | mp48 | mp49 | mp5 | mp50 | mp51 | mp52 | mp53 | mp54 | mp55 | mp56 | mp57 | mp58 | mp59 | mp6 | mp60 | mp61 | mp62 | mp63 | mp64 | mp65 | mp66 | mp67 | mp68 | mp69 | mp7 | mp70 | mp71 | mp72 | mp73 | mp74 | mp75 | mp76 | mp77 | mp78 | mp79 | mp8 | mp80 | mp81 | mp82 | mp83 | mp84 | mp85 | mp86 | mp87 | mp88 | mp89 | mp9 | mp90 | mp91 | mp92 | mp93 | mp94 | mp95 | mp96 | mp97 | mp98 | mp99 | rootfs | unused0 | unused1 | unused10 | unused100 | unused101 | unused102 | unused103 | unused104 | unused105 | unused106 | unused107 | unused108 | unused109 | unused11 | unused110 | unused111 | unused112 | unused113 | unused114 | unused115 | unused116 | unused117 | unused118 | unused119 | unused12 | unused120 | unused121 | unused122 | unused123 | unused124 | unused125 | unused126 | unused127 | unused128 | unused129 | unused13 | unused130 | unused131 | unused132 | unused133 | unused134 | unused135 | unused136 | unused137 | unused138 | unused139 | unused14 | unused140 | unused141 | unused142 | unused143 | unused144 | unused145 | unused146 | unused147 | unused148 | unused149 | unused15 | unused150 | unused151 | unused152 | unused153 | unused154 | unused155 | unused156 | unused157 | unused158 | unused159 | unused16 | unused160 | unused161 | unused162 | unused163 | unused164 | unused165 | unused166 | unused167 | unused168 | unused169 | unused17 | unused170 | unused171 | unused172 | unused173 | unused174 | unused175 | unused176 | unused177 | unused178 | unused179 | unused18 | unused180 | unused181 | unused182 | unused183 | unused184 | unused185 | unused186 | unused187 | unused188 | unused189 | unused19 | unused190 | unused191 | unused192 | unused193 | unused194 | unused195 | unused196 | unused197 | unused198 | unused199 | unused2 | unused20 | unused200 | unused201 | unused202 | unused203 | unused204 | unused205 | unused206 | unused207 | unused208 | unused209 | unused21 | unused210 | unused211 | unused212 | unused213 | unused214 | unused215 | unused216 | unused217 | unused218 | 
unused219 | unused22 | unused220 | unused221 | unused222 | unused223 | unused224 | unused225 | unused226 | unused227 | unused228 | unused229 | unused23 | unused230 | unused231 | unused232 | unused233 | unused234 | unused235 | unused236 | unused237 | unused238 | unused239 | unused24 | unused240 | unused241 | unused242 | unused243 | unused244 | unused245 | unused246 | unused247 | unused248 | unused249 | unused25 | unused250 | unused251 | unused252 | unused253 | unused254 | unused255 | unused26 | unused27 | unused28 | unused29 | unused3 | unused30 | unused31 | unused32 | unused33 | unused34 | unused35 | unused36 | unused37 | unused38 | unused39 | unused4 | unused40 | unused41 | unused42 | unused43 | unused44 | unused45 | unused46 | unused47 | unused48 | unused49 | unused5 | unused50 | unused51 | unused52 | unused53 | unused54 | unused55 | unused56 | unused57 | unused58 | unused59 | unused6 | unused60 | unused61 | unused62 | unused63 | unused64 | unused65 | unused66 | unused67 | unused68 | unused69 | unused7 | unused70 | unused71 | unused72 | unused73 | unused74 | unused75 | unused76 | unused77 | unused78 | unused79 | unused8 | unused80 | unused81 | unused82 | unused83 | unused84 | unused85 | unused86 | unused87 | unused88 | unused89 | unused9 | unused90 | unused91 | unused92 | unused93 | unused94 | unused95 | unused96 | unused97 | unused98 | unused99>
<卷>: <mp0 | mp1 | mp10 | mp100 | mp101 | mp102 | mp103 | mp104 | mp105 | mp106 | mp107 | mp108 | mp109 | mp11 | mp110 | mp111 | mp112 | mp113 | mp114 | mp115 | mp116 | mp117 | mp118 | mp119 | mp12 | mp120 | mp121 | mp122 | mp123 | mp124 | mp125 | mp126 | mp127 | mp128 | mp129 | mp13 | mp130 | mp131 | mp132 | mp133 | mp134 | mp135 | mp136 | mp137 | mp138 | mp139 | mp14 | mp140 | mp141 | mp142 | mp143 | mp144 | mp145 | mp146 | mp147 | mp148 | mp149 | mp15 | mp150 | mp151 | mp152 | mp153 | mp154 | mp155 | mp156 | mp157 | mp158 | mp159 | mp16 | mp160 | mp161 | mp162 | mp163 | mp164 | mp165 | mp166 | mp167 | mp168 | mp169 | mp17 | mp170 | mp171 | mp172 | mp173 | mp174 | mp175 | mp176 | mp177 | mp178 | mp179 | mp18 | mp180 | mp181 | mp182 | mp183 | mp184 | mp185 | mp186 | mp187 | mp188 | mp189 | mp19 | mp190 | mp191 | mp192 | mp193 | mp194 | mp195 | mp196 | mp197 | mp198 | mp199 | mp2 | mp20 | mp200 | mp201 | mp202 | mp203 | mp204 | mp205 | mp206 | mp207 | mp208 | mp209 | mp21 | mp210 | mp211 | mp212 | mp213 | mp214 | mp215 | mp216 | mp217 | mp218 | mp219 | mp22 | mp220 | mp221 | mp222 | mp223 | mp224 | mp225 | mp226 | mp227 | mp228 | mp229 | mp23 | mp230 | mp231 | mp232 | mp233 | mp234 | mp235 | mp236 | mp237 | mp238 | mp239 | mp24 | mp240 | mp241 | mp242 | mp243 | mp244 | mp245 | mp246 | mp247 | mp248 | mp249 | mp25 | mp250 | mp251 | mp252 | mp253 | mp254 | mp255 | mp26 | mp27 | mp28 | mp29 | mp3 | mp30 | mp31 | mp32 | mp33 | mp34 | mp35 | mp36 | mp37 | mp38 | mp39 | mp4 | mp40 | mp41 | mp42 | mp43 | mp44 | mp45 | mp46 | mp47 | mp48 | mp49 | mp5 | mp50 | mp51 | mp52 | mp53 | mp54 | mp55 | mp56 | mp57 | mp58 | mp59 | mp6 | mp60 | mp61 | mp62 | mp63 | mp64 | mp65 | mp66 | mp67 | mp68 | mp69 | mp7 | mp70 | mp71 | mp72 | mp73 | mp74 | mp75 | mp76 | mp77 | mp78 | mp79 | mp8 | mp80 | mp81 | mp82 | mp83 | mp84 | mp85 | mp86 | mp87 | mp88 | mp89 | mp9 | mp90 | mp91 | mp92 | mp93 | mp94 | mp95 | mp96 | mp97 | mp98 | mp99 | rootfs | unused0 | unused1 | unused10 | unused100 | unused101 | unused102 | unused103 | unused104 | unused105 | unused106 | unused107 | unused108 | unused109 | unused11 | unused110 | unused111 | unused112 | unused113 | unused114 | unused115 | unused116 | unused117 | unused118 | unused119 | unused12 | unused120 | unused121 | unused122 | unused123 | unused124 | unused125 | unused126 | unused127 | unused128 | unused129 | unused13 | unused130 | unused131 | unused132 | unused133 | unused134 | unused135 | unused136 | unused137 | unused138 | unused139 | unused14 | unused140 | unused141 | unused142 | unused143 | unused144 | unused145 | unused146 | unused147 | unused148 | unused149 | unused15 | unused150 | unused151 | unused152 | unused153 | unused154 | unused155 | unused156 | unused157 | unused158 | unused159 | unused16 | unused160 | unused161 | unused162 | unused163 | unused164 | unused165 | unused166 | unused167 | unused168 | unused169 | unused17 | unused170 | unused171 | unused172 | unused173 | unused174 | unused175 | unused176 | unused177 | unused178 | unused179 | unused18 | unused180 | unused181 | unused182 | unused183 | unused184 | unused185 | unused186 | unused187 | unused188 | unused189 | unused19 | unused190 | unused191 | unused192 | unused193 | unused194 | unused195 | unused196 | unused197 | unused198 | unused199 | unused2 | unused20 | unused200 | unused201 | unused202 | unused203 | unused204 | unused205 | unused206 | unused207 | unused208 | unused209 | unused21 | unused210 | unused211 | unused212 | unused213 | unused214 | unused215 | unused216 | unused217 | unused218 | unused219 | 
unused22 | unused220 | unused221 | unused222 | unused223 | unused224 | unused225 | unused226 | unused227 | unused228 | unused229 | unused23 | unused230 | unused231 | unused232 | unused233 | unused234 | unused235 | unused236 | unused237 | unused238 | unused239 | unused24 | unused240 | unused241 | unused242 | unused243 | unused244 | unused245 | unused246 | unused247 | unused248 | unused249 | unused25 | unused250 | unused251 | unused252 | unused253 | unused254 | unused255 | unused26 | unused27 | unused28 | unused29 | unused3 | unused30 | unused31 | unused32 | unused33 | unused34 | unused35 | unused36 | unused37 | unused38 | unused39 | unused4 | unused40 | unused41 | unused42 | unused43 | unused44 | unused45 | unused46 | unused47 | unused48 | unused49 | unused5 | unused50 | unused51 | unused52 | unused53 | unused54 | unused55 | unused56 | unused57 | unused58 | unused59 | unused6 | unused60 | unused61 | unused62 | unused63 | unused64 | unused65 | unused66 | unused67 | unused68 | unused69 | unused7 | unused70 | unused71 | unused72 | unused73 | unused74 | unused75 | unused76 | unused77 | unused78 | unused79 | unused8 | unused80 | unused81 | unused82 | unused83 | unused84 | unused85 | unused86 | unused87 | unused88 | unused89 | unused9 | unused90 | unused91 | unused92 | unused93 | unused94 | unused95 | unused96 | unused97 | unused98 | unused99> -
Volume which will be moved.
将被移动的卷。 - <storage>: <storage ID>
-
Target Storage. 目标存储。
-
<target-vmid>: <integer> (100 - 999999999)
<target-vmid>:<整数>(100 - 999999999) -
The (unique) ID of the VM.
虚拟机的(唯一)ID。 -
<target-volume>: <mp0 | mp1 | mp10 | mp100 | mp101 | mp102 | mp103 | mp104 | mp105 | mp106 | mp107 | mp108 | mp109 | mp11 | mp110 | mp111 | mp112 | mp113 | mp114 | mp115 | mp116 | mp117 | mp118 | mp119 | mp12 | mp120 | mp121 | mp122 | mp123 | mp124 | mp125 | mp126 | mp127 | mp128 | mp129 | mp13 | mp130 | mp131 | mp132 | mp133 | mp134 | mp135 | mp136 | mp137 | mp138 | mp139 | mp14 | mp140 | mp141 | mp142 | mp143 | mp144 | mp145 | mp146 | mp147 | mp148 | mp149 | mp15 | mp150 | mp151 | mp152 | mp153 | mp154 | mp155 | mp156 | mp157 | mp158 | mp159 | mp16 | mp160 | mp161 | mp162 | mp163 | mp164 | mp165 | mp166 | mp167 | mp168 | mp169 | mp17 | mp170 | mp171 | mp172 | mp173 | mp174 | mp175 | mp176 | mp177 | mp178 | mp179 | mp18 | mp180 | mp181 | mp182 | mp183 | mp184 | mp185 | mp186 | mp187 | mp188 | mp189 | mp19 | mp190 | mp191 | mp192 | mp193 | mp194 | mp195 | mp196 | mp197 | mp198 | mp199 | mp2 | mp20 | mp200 | mp201 | mp202 | mp203 | mp204 | mp205 | mp206 | mp207 | mp208 | mp209 | mp21 | mp210 | mp211 | mp212 | mp213 | mp214 | mp215 | mp216 | mp217 | mp218 | mp219 | mp22 | mp220 | mp221 | mp222 | mp223 | mp224 | mp225 | mp226 | mp227 | mp228 | mp229 | mp23 | mp230 | mp231 | mp232 | mp233 | mp234 | mp235 | mp236 | mp237 | mp238 | mp239 | mp24 | mp240 | mp241 | mp242 | mp243 | mp244 | mp245 | mp246 | mp247 | mp248 | mp249 | mp25 | mp250 | mp251 | mp252 | mp253 | mp254 | mp255 | mp26 | mp27 | mp28 | mp29 | mp3 | mp30 | mp31 | mp32 | mp33 | mp34 | mp35 | mp36 | mp37 | mp38 | mp39 | mp4 | mp40 | mp41 | mp42 | mp43 | mp44 | mp45 | mp46 | mp47 | mp48 | mp49 | mp5 | mp50 | mp51 | mp52 | mp53 | mp54 | mp55 | mp56 | mp57 | mp58 | mp59 | mp6 | mp60 | mp61 | mp62 | mp63 | mp64 | mp65 | mp66 | mp67 | mp68 | mp69 | mp7 | mp70 | mp71 | mp72 | mp73 | mp74 | mp75 | mp76 | mp77 | mp78 | mp79 | mp8 | mp80 | mp81 | mp82 | mp83 | mp84 | mp85 | mp86 | mp87 | mp88 | mp89 | mp9 | mp90 | mp91 | mp92 | mp93 | mp94 | mp95 | mp96 | mp97 | mp98 | mp99 | rootfs | unused0 | unused1 | unused10 | unused100 | unused101 | unused102 | unused103 | unused104 | unused105 | unused106 | unused107 | unused108 | unused109 | unused11 | unused110 | unused111 | unused112 | unused113 | unused114 | unused115 | unused116 | unused117 | unused118 | unused119 | unused12 | unused120 | unused121 | unused122 | unused123 | unused124 | unused125 | unused126 | unused127 | unused128 | unused129 | unused13 | unused130 | unused131 | unused132 | unused133 | unused134 | unused135 | unused136 | unused137 | unused138 | unused139 | unused14 | unused140 | unused141 | unused142 | unused143 | unused144 | unused145 | unused146 | unused147 | unused148 | unused149 | unused15 | unused150 | unused151 | unused152 | unused153 | unused154 | unused155 | unused156 | unused157 | unused158 | unused159 | unused16 | unused160 | unused161 | unused162 | unused163 | unused164 | unused165 | unused166 | unused167 | unused168 | unused169 | unused17 | unused170 | unused171 | unused172 | unused173 | unused174 | unused175 | unused176 | unused177 | unused178 | unused179 | unused18 | unused180 | unused181 | unused182 | unused183 | unused184 | unused185 | unused186 | unused187 | unused188 | unused189 | unused19 | unused190 | unused191 | unused192 | unused193 | unused194 | unused195 | unused196 | unused197 | unused198 | unused199 | unused2 | unused20 | unused200 | unused201 | unused202 | unused203 | unused204 | unused205 | unused206 | unused207 | unused208 | unused209 | unused21 | unused210 | unused211 | unused212 | unused213 | unused214 | unused215 | unused216 | unused217 | unused218 | 
unused219 | unused22 | unused220 | unused221 | unused222 | unused223 | unused224 | unused225 | unused226 | unused227 | unused228 | unused229 | unused23 | unused230 | unused231 | unused232 | unused233 | unused234 | unused235 | unused236 | unused237 | unused238 | unused239 | unused24 | unused240 | unused241 | unused242 | unused243 | unused244 | unused245 | unused246 | unused247 | unused248 | unused249 | unused25 | unused250 | unused251 | unused252 | unused253 | unused254 | unused255 | unused26 | unused27 | unused28 | unused29 | unused3 | unused30 | unused31 | unused32 | unused33 | unused34 | unused35 | unused36 | unused37 | unused38 | unused39 | unused4 | unused40 | unused41 | unused42 | unused43 | unused44 | unused45 | unused46 | unused47 | unused48 | unused49 | unused5 | unused50 | unused51 | unused52 | unused53 | unused54 | unused55 | unused56 | unused57 | unused58 | unused59 | unused6 | unused60 | unused61 | unused62 | unused63 | unused64 | unused65 | unused66 | unused67 | unused68 | unused69 | unused7 | unused70 | unused71 | unused72 | unused73 | unused74 | unused75 | unused76 | unused77 | unused78 | unused79 | unused8 | unused80 | unused81 | unused82 | unused83 | unused84 | unused85 | unused86 | unused87 | unused88 | unused89 | unused9 | unused90 | unused91 | unused92 | unused93 | unused94 | unused95 | unused96 | unused97 | unused98 | unused99>
<目标卷>: <mp0 | mp1 | mp10 | mp100 | mp101 | mp102 | mp103 | mp104 | mp105 | mp106 | mp107 | mp108 | mp109 | mp11 | mp110 | mp111 | mp112 | mp113 | mp114 | mp115 | mp116 | mp117 | mp118 | mp119 | mp12 | mp120 | mp121 | mp122 | mp123 | mp124 | mp125 | mp126 | mp127 | mp128 | mp129 | mp13 | mp130 | mp131 | mp132 | mp133 | mp134 | mp135 | mp136 | mp137 | mp138 | mp139 | mp14 | mp140 | mp141 | mp142 | mp143 | mp144 | mp145 | mp146 | mp147 | mp148 | mp149 | mp15 | mp150 | mp151 | mp152 | mp153 | mp154 | mp155 | mp156 | mp157 | mp158 | mp159 | mp16 | mp160 | mp161 | mp162 | mp163 | mp164 | mp165 | mp166 | mp167 | mp168 | mp169 | mp17 | mp170 | mp171 | mp172 | mp173 | mp174 | mp175 | mp176 | mp177 | mp178 | mp179 | mp18 | mp180 | mp181 | mp182 | mp183 | mp184 | mp185 | mp186 | mp187 | mp188 | mp189 | mp19 | mp190 | mp191 | mp192 | mp193 | mp194 | mp195 | mp196 | mp197 | mp198 | mp199 | mp2 | mp20 | mp200 | mp201 | mp202 | mp203 | mp204 | mp205 | mp206 | mp207 | mp208 | mp209 | mp21 | mp210 | mp211 | mp212 | mp213 | mp214 | mp215 | mp216 | mp217 | mp218 | mp219 | mp22 | mp220 | mp221 | mp222 | mp223 | mp224 | mp225 | mp226 | mp227 | mp228 | mp229 | mp23 | mp230 | mp231 | mp232 | mp233 | mp234 | mp235 | mp236 | mp237 | mp238 | mp239 | mp24 | mp240 | mp241 | mp242 | mp243 | mp244 | mp245 | mp246 | mp247 | mp248 | mp249 | mp25 | mp250 | mp251 | mp252 | mp253 | mp254 | mp255 | mp26 | mp27 | mp28 | mp29 | mp3 | mp30 | mp31 | mp32 | mp33 | mp34 | mp35 | mp36 | mp37 | mp38 | mp39 | mp4 | mp40 | mp41 | mp42 | mp43 | mp44 | mp45 | mp46 | mp47 | mp48 | mp49 | mp5 | mp50 | mp51 | mp52 | mp53 | mp54 | mp55 | mp56 | mp57 | mp58 | mp59 | mp6 | mp60 | mp61 | mp62 | mp63 | mp64 | mp65 | mp66 | mp67 | mp68 | mp69 | mp7 | mp70 | mp71 | mp72 | mp73 | mp74 | mp75 | mp76 | mp77 | mp78 | mp79 | mp8 | mp80 | mp81 | mp82 | mp83 | mp84 | mp85 | mp86 | mp87 | mp88 | mp89 | mp9 | mp90 | mp91 | mp92 | mp93 | mp94 | mp95 | mp96 | mp97 | mp98 | mp99 | rootfs | unused0 | unused1 | unused10 | unused100 | unused101 | unused102 | unused103 | unused104 | unused105 | unused106 | unused107 | unused108 | unused109 | unused11 | unused110 | unused111 | unused112 | unused113 | unused114 | unused115 | unused116 | unused117 | unused118 | unused119 | unused12 | unused120 | unused121 | unused122 | unused123 | unused124 | unused125 | unused126 | unused127 | unused128 | unused129 | unused13 | unused130 | unused131 | unused132 | unused133 | unused134 | unused135 | unused136 | unused137 | unused138 | unused139 | unused14 | unused140 | unused141 | unused142 | unused143 | unused144 | unused145 | unused146 | unused147 | unused148 | unused149 | unused15 | unused150 | unused151 | unused152 | unused153 | unused154 | unused155 | unused156 | unused157 | unused158 | unused159 | unused16 | unused160 | unused161 | unused162 | unused163 | unused164 | unused165 | unused166 | unused167 | unused168 | unused169 | unused17 | unused170 | unused171 | unused172 | unused173 | unused174 | unused175 | unused176 | unused177 | unused178 | unused179 | unused18 | unused180 | unused181 | unused182 | unused183 | unused184 | unused185 | unused186 | unused187 | unused188 | unused189 | unused19 | unused190 | unused191 | unused192 | unused193 | unused194 | unused195 | unused196 | unused197 | unused198 | unused199 | unused2 | unused20 | unused200 | unused201 | unused202 | unused203 | unused204 | unused205 | unused206 | unused207 | unused208 | unused209 | unused21 | unused210 | unused211 | unused212 | unused213 | unused214 | unused215 | unused216 | unused217 | unused218 | unused219 
| unused22 | unused220 | unused221 | unused222 | unused223 | unused224 | unused225 | unused226 | unused227 | unused228 | unused229 | unused23 | unused230 | unused231 | unused232 | unused233 | unused234 | unused235 | unused236 | unused237 | unused238 | unused239 | unused24 | unused240 | unused241 | unused242 | unused243 | unused244 | unused245 | unused246 | unused247 | unused248 | unused249 | unused25 | unused250 | unused251 | unused252 | unused253 | unused254 | unused255 | unused26 | unused27 | unused28 | unused29 | unused3 | unused30 | unused31 | unused32 | unused33 | unused34 | unused35 | unused36 | unused37 | unused38 | unused39 | unused4 | unused40 | unused41 | unused42 | unused43 | unused44 | unused45 | unused46 | unused47 | unused48 | unused49 | unused5 | unused50 | unused51 | unused52 | unused53 | unused54 | unused55 | unused56 | unused57 | unused58 | unused59 | unused6 | unused60 | unused61 | unused62 | unused63 | unused64 | unused65 | unused66 | unused67 | unused68 | unused69 | unused7 | unused70 | unused71 | unused72 | unused73 | unused74 | unused75 | unused76 | unused77 | unused78 | unused79 | unused8 | unused80 | unused81 | unused82 | unused83 | unused84 | unused85 | unused86 | unused87 | unused88 | unused89 | unused9 | unused90 | unused91 | unused92 | unused93 | unused94 | unused95 | unused96 | unused97 | unused98 | unused99> -
The config key the volume will be moved to. Default is the source volume key.
卷将被移动到的配置键。默认是源卷键。 -
--bwlimit <number> (0 - N) (default = clone limit from datacenter or storage config)
--bwlimit <数字> (0 - N)(默认 = 来自数据中心或存储配置的克隆限制) -
Override I/O bandwidth limit (in KiB/s).
覆盖 I/O 带宽限制(以 KiB/s 为单位)。 -
--delete <boolean> (default = 0)
--delete <boolean> (默认 = 0) -
Delete the original volume after successful copy. By default the original is kept as an unused volume entry.
成功复制后删除原始卷。默认情况下,原始卷作为未使用的卷条目保留。 - --digest <string>
-
Prevent changes if current configuration file has different SHA1 digest. This can be used to prevent concurrent modifications.
如果当前配置文件的 SHA1 摘要不同,则阻止更改。此功能可用于防止并发修改。 - --target-digest <string>
-
Prevent changes if current configuration file of the target container has a different SHA1 digest. This can be used to prevent concurrent modifications.
如果目标容器的当前配置文件具有不同的 SHA1 摘要,则阻止更改。此功能可用于防止并发修改。
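For example, to move the mp0 volume of a hypothetical container 100 to a storage named local-lvm and remove the original volume after a successful copy (the VMID, mount point and storage name are illustrative):

pct move-volume 100 mp0 local-lvm --delete 1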
pct move_volume
An alias for pct move-volume.
pct move-volume 的别名。
pct pending <vmid>
Get container configuration, including pending changes.
获取容器配置,包括待处理的更改。
-
<vmid>: <integer> (100 - 999999999)
<vmid>:<整数>(100 - 999999999) -
The (unique) ID of the VM.
虚拟机的(唯一)ID。
pct pull <vmid> <path> <destination> [OPTIONS]
pct pull <vmid> <path> <destination> [选项]
Copy a file from the container to the local system.
将文件从容器复制到本地系统。
-
<vmid>: <integer> (100 - 999999999)
<vmid>:<整数>(100 - 999999999) -
The (unique) ID of the VM.
虚拟机的(唯一)ID。 - <path>: <string> <path>:<string>
-
Path to a file inside the container to pull.
容器内要拉取的文件路径。 - <destination>: <string> <destination>:<string>
-
Destination 目标位置
- --group <string>
-
Owner group name or id.
所有者组名称或 ID。 - --perms <string>
-
File permissions to use (octal by default, prefix with 0x for hexadecimal).
要使用的文件权限(默认八进制,十六进制前缀为 0x)。 - --user <string>
-
Owner user name or id.
所有者用户名或 ID。
pct push <vmid> <file> <destination> [OPTIONS]
Copy a local file to the container.
将本地文件复制到容器中。
-
<vmid>: <integer> (100 - 999999999)
<vmid>:<整数>(100 - 999999999) -
The (unique) ID of the VM.
虚拟机的(唯一)ID。 - <file>: <string> <file>:<字符串>
-
Path to a local file.
本地文件的路径。 - <destination>: <string>
-
Destination inside the container to write to.
容器内写入的目标位置。 - --group <string>
-
Owner group name or id. When using a name it must exist inside the container.
所有者组名或 ID。使用名称时,该名称必须存在于容器内。 - --perms <string>
-
File permissions to use (octal by default, prefix with 0x for hexadecimal).
文件权限(默认八进制,十六进制前缀为 0x)。 - --user <string>
-
Owner user name or id. When using a name it must exist inside the container.
所有者用户名或 ID。使用用户名时,该用户必须存在于容器内。
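Two illustrative transfers for a hypothetical container 100 (all paths, owners and permissions are example values): copy a file out of the container with pct pull, then push a local file into it with a defined owner and mode:

pct pull 100 /etc/hostname /tmp/ct100-hostname
pct push 100 /root/motd.txt /etc/motd --user root --group root --perms 0644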
pct reboot <vmid> [OPTIONS]
pct reboot <vmid> [选项]
Reboot the container by shutting it down, and starting it again. Applies
pending changes.
通过关闭容器并重新启动来重启容器。应用待处理的更改。
-
<vmid>: <integer> (100 - 999999999)
<vmid>:<整数>(100 - 999999999) -
The (unique) ID of the VM.
虚拟机的(唯一)ID。 -
--timeout <integer> (0 - N)
--timeout <整数> (0 - N) -
Wait maximal timeout seconds for the shutdown.
等待最多 timeout 秒以完成关机。
pct remote-migrate <vmid> [<target-vmid>] <target-endpoint> --target-bridge <string> --target-storage <string> [OPTIONS]
pct remote-migrate <vmid> [<target-vmid>] <target-endpoint> --target-bridge <字符串> --target-storage <字符串> [选项]
Migrate container to a remote cluster. Creates a new migration task.
EXPERIMENTAL feature!
将容器迁移到远程集群。创建一个新的迁移任务。实验性功能!
-
<vmid>: <integer> (100 - 999999999)
<vmid>:<整数>(100 - 999999999) -
The (unique) ID of the VM.
虚拟机的(唯一)ID。 -
<target-vmid>: <integer> (100 - 999999999)
<target-vmid>:<整数>(100 - 999999999) -
The (unique) ID of the VM.
虚拟机的(唯一)ID。 - <target-endpoint>: apitoken=<PVEAPIToken=user@realm!token=SECRET> ,host=<ADDRESS> [,fingerprint=<FINGERPRINT>] [,port=<PORT>]
-
Remote target endpoint 远程目标端点
-
--bwlimit <integer> (0 - N) (default = migrate limit from datacenter or storage config)
--bwlimit <整数> (0 - N) (默认 = 从数据中心或存储配置中迁移限制) -
Override I/O bandwidth limit (in KiB/s).
覆盖 I/O 带宽限制(以 KiB/s 为单位)。 -
--delete <boolean> (default = 0)
--delete <boolean>(默认值 = 0) -
Delete the original CT and related data after successful migration. By default the original CT is kept on the source cluster in a stopped state.
迁移成功后删除原始 CT 及相关数据。默认情况下,原始 CT 会以停止状态保留在源集群上。 - --online <boolean>
-
Use online/live migration.
使用在线/实时迁移。 - --restart <boolean>
-
Use restart migration 使用重启迁移
- --target-bridge <string>
-
Mapping from source to target bridges. Providing only a single bridge ID maps all source bridges to that bridge. Providing the special value 1 will map each source bridge to itself.
从源桥接到目标桥接的映射。仅提供单个桥接 ID 时,会将所有源桥接映射到该桥接。提供特殊值 1 时,会将每个源桥接映射到其自身。 - --target-storage <string>
-
Mapping from source to target storages. Providing only a single storage ID maps all source storages to that storage. Providing the special value 1 will map each source storage to itself.
从源存储到目标存储的映射。仅提供单个存储 ID 时,会将所有源存储映射到该存储。提供特殊值 1 时,会将每个源存储映射到其自身。 -
--timeout <integer> (default = 180)
--timeout <integer> (默认 = 180) -
Timeout in seconds for shutdown for restart migration
重启迁移时关闭的超时时间,单位为秒。
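A hedged sketch only, since this feature is marked experimental: migrate a hypothetical container 100 to a remote cluster reachable at 192.0.2.10, keeping the documented endpoint format (the VMIDs, address, API token, bridge and storage mappings are all placeholders you must replace with your own values):

pct remote-migrate 100 200 'apitoken=PVEAPIToken=root@pam!token=SECRET,host=192.0.2.10' --target-bridge vmbr0 --target-storage local-lvm --restart 1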
pct rescan [OPTIONS] pct rescan [选项]
Rescan all storages and update disk sizes and unused disk images.
重新扫描所有存储并更新磁盘大小和未使用的磁盘映像。
-
--dryrun <boolean> (default = 0)
--dryrun <布尔值>(默认 = 0) -
Do not actually write changes to the configuration.
不实际写入配置更改。 -
--vmid <integer> (100 - 999999999)
--vmid <整数> (100 - 999999999) -
The (unique) ID of the VM.
虚拟机的(唯一)ID。
pct resize <vmid> <disk> <size> [OPTIONS]
pct resize <vmid> <磁盘> <大小> [选项]
Resize a container mount point.
调整容器挂载点的大小。
-
<vmid>: <integer> (100 - 999999999)
<vmid>:<整数>(100 - 999999999) -
The (unique) ID of the VM.
虚拟机的(唯一)ID。 - <disk>: <mp0 | mp1 | mp10 | mp100 | mp101 | mp102 | mp103 | mp104 | mp105 | mp106 | mp107 | mp108 | mp109 | mp11 | mp110 | mp111 | mp112 | mp113 | mp114 | mp115 | mp116 | mp117 | mp118 | mp119 | mp12 | mp120 | mp121 | mp122 | mp123 | mp124 | mp125 | mp126 | mp127 | mp128 | mp129 | mp13 | mp130 | mp131 | mp132 | mp133 | mp134 | mp135 | mp136 | mp137 | mp138 | mp139 | mp14 | mp140 | mp141 | mp142 | mp143 | mp144 | mp145 | mp146 | mp147 | mp148 | mp149 | mp15 | mp150 | mp151 | mp152 | mp153 | mp154 | mp155 | mp156 | mp157 | mp158 | mp159 | mp16 | mp160 | mp161 | mp162 | mp163 | mp164 | mp165 | mp166 | mp167 | mp168 | mp169 | mp17 | mp170 | mp171 | mp172 | mp173 | mp174 | mp175 | mp176 | mp177 | mp178 | mp179 | mp18 | mp180 | mp181 | mp182 | mp183 | mp184 | mp185 | mp186 | mp187 | mp188 | mp189 | mp19 | mp190 | mp191 | mp192 | mp193 | mp194 | mp195 | mp196 | mp197 | mp198 | mp199 | mp2 | mp20 | mp200 | mp201 | mp202 | mp203 | mp204 | mp205 | mp206 | mp207 | mp208 | mp209 | mp21 | mp210 | mp211 | mp212 | mp213 | mp214 | mp215 | mp216 | mp217 | mp218 | mp219 | mp22 | mp220 | mp221 | mp222 | mp223 | mp224 | mp225 | mp226 | mp227 | mp228 | mp229 | mp23 | mp230 | mp231 | mp232 | mp233 | mp234 | mp235 | mp236 | mp237 | mp238 | mp239 | mp24 | mp240 | mp241 | mp242 | mp243 | mp244 | mp245 | mp246 | mp247 | mp248 | mp249 | mp25 | mp250 | mp251 | mp252 | mp253 | mp254 | mp255 | mp26 | mp27 | mp28 | mp29 | mp3 | mp30 | mp31 | mp32 | mp33 | mp34 | mp35 | mp36 | mp37 | mp38 | mp39 | mp4 | mp40 | mp41 | mp42 | mp43 | mp44 | mp45 | mp46 | mp47 | mp48 | mp49 | mp5 | mp50 | mp51 | mp52 | mp53 | mp54 | mp55 | mp56 | mp57 | mp58 | mp59 | mp6 | mp60 | mp61 | mp62 | mp63 | mp64 | mp65 | mp66 | mp67 | mp68 | mp69 | mp7 | mp70 | mp71 | mp72 | mp73 | mp74 | mp75 | mp76 | mp77 | mp78 | mp79 | mp8 | mp80 | mp81 | mp82 | mp83 | mp84 | mp85 | mp86 | mp87 | mp88 | mp89 | mp9 | mp90 | mp91 | mp92 | mp93 | mp94 | mp95 | mp96 | mp97 | mp98 | mp99 | rootfs>
-
The disk you want to resize.
您想要调整大小的磁盘。 - <size>: \+?\d+(\.\d+)?[KMGT]?
-
The new size. With the + sign the value is added to the actual size of the volume and without it, the value is taken as an absolute one. Shrinking disk size is not supported.
新的大小。带有 + 符号时,数值将被加到卷的当前大小上;不带 + 符号时,数值被视为绝对值。不支持缩小磁盘大小。 - --digest <string>
-
Prevent changes if current configuration file has different SHA1 digest. This can be used to prevent concurrent modifications.
如果当前配置文件的 SHA1 摘要不同,则阻止更改。此功能可用于防止并发修改。
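For example, to grow the root filesystem of a hypothetical container 100 by 2 GiB (the VMID and size are illustrative; shrinking is not supported):

pct resize 100 rootfs +2G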
pct restore <vmid> <ostemplate> [OPTIONS]
Create or restore a container.
创建或恢复容器。
-
<vmid>: <integer> (100 - 999999999)
<vmid>: <整数> (100 - 999999999) -
The (unique) ID of the VM.
虚拟机的(唯一)ID。 - <ostemplate>: <string> <ostemplate>: <字符串>
-
The OS template or backup file.
操作系统模板或备份文件。 -
--arch <amd64 | arm64 | armhf | i386 | riscv32 | riscv64> (default = amd64)
--arch <amd64 | arm64 | armhf | i386 | riscv32 | riscv64>(默认 = amd64) -
OS architecture type. 操作系统架构类型。
-
--bwlimit <number> (0 - N) (default = restore limit from datacenter or storage config)
--bwlimit <数字>(0 - N)(默认 = 从数据中心或存储配置恢复限制) -
Override I/O bandwidth limit (in KiB/s).
覆盖 I/O 带宽限制(以 KiB/s 为单位)。 -
--cmode <console | shell | tty> (default = tty)
--cmode <console | shell | tty>(默认 = tty) -
Console mode. By default, the console command tries to open a connection to one of the available tty devices. By setting cmode to console it tries to attach to /dev/console instead. If you set cmode to shell, it simply invokes a shell inside the container (no login).
控制台模式。默认情况下,console 命令尝试连接到可用的 tty 设备之一。通过将 cmode 设置为 console,它会尝试连接到 /dev/console。若将 cmode 设置为 shell,则会在容器内直接调用一个 shell(无登录)。 -
--console <boolean> (default = 1)
--console <boolean>(默认 = 1) -
Attach a console device (/dev/console) to the container.
将控制台设备(/dev/console)连接到容器。 -
--cores <integer> (1 - 8192)
--cores <整数> (1 - 8192) -
The number of cores assigned to the container. A container can use all available cores by default.
分配给容器的核心数。容器默认可以使用所有可用核心。 -
--cpulimit <number> (0 - 8192) (default = 0)
--cpulimit <数字> (0 - 8192) (默认 = 0) -
Limit of CPU usage.
CPU 使用限制。If the computer has 2 CPUs, it has a total of 2 CPU time available. Value 0 indicates no CPU limit.
如果计算机有 2 个 CPU,则总共有 2 个 CPU 时间。值 0 表示没有 CPU 限制。 -
--cpuunits <integer> (0 - 500000) (default = cgroup v1: 1024, cgroup v2: 100)
--cpuunits <整数> (0 - 500000)(默认值 = cgroup v1: 1024,cgroup v2: 100) -
CPU weight for a container, will be clamped to [1, 10000] in cgroup v2.
容器的 CPU 权重,在 cgroup v2 中将限制在[1, 10000]范围内。 -
--debug <boolean> (default = 0)
--debug <布尔值>(默认值 = 0) -
Try to be more verbose. For now this only enables debug log-level on start.
尝试更详细一些。目前这仅在启动时启用调试日志级别。 - --description <string>
-
Description for the Container. Shown in the web-interface CT’s summary. This is saved as comment inside the configuration file.
容器的描述。在网页界面 CT 摘要中显示。此内容作为注释保存在配置文件中。 -
--dev[n] [[path=]<Path>] [,deny-write=<1|0>] [,gid=<integer>] [,mode=<Octal access mode>] [,uid=<integer>]
--dev[n] [[path=]<路径>] [,deny-write=<1|0>] [,gid=<整数>] [,mode=<八进制访问模式>] [,uid=<整数>] -
Device to pass through to the container
要传递给容器的设备 - --features [force_rw_sys=<1|0>] [,fuse=<1|0>] [,keyctl=<1|0>] [,mknod=<1|0>] [,mount=<fstype;fstype;...>] [,nesting=<1|0>]
-
Allow containers access to advanced features.
允许容器访问高级功能。 - --force <boolean>
-
Allow to overwrite existing container.
允许覆盖现有容器。 - --hookscript <string>
-
Script that will be executed during various steps in the containers lifetime.
将在容器生命周期的各个阶段执行的脚本。 - --hostname <string>
-
Set a host name for the container.
为容器设置主机名。 - --ignore-unpack-errors <boolean>
-
Ignore errors when extracting the template.
在解压模板时忽略错误。 - --lock <backup | create | destroyed | disk | fstrim | migrate | mounted | rollback | snapshot | snapshot-delete>
-
Lock/unlock the container.
锁定/解锁容器。 -
--memory <integer> (16 - N) (default = 512)
--memory <整数> (16 - N) (默认 = 512) -
Amount of RAM for the container in MB.
容器的内存大小,单位为 MB。 -
--mp[n] [volume=]<volume> ,mp=<Path> [,acl=<1|0>] [,backup=<1|0>] [,mountoptions=<opt[;opt...]>] [,quota=<1|0>] [,replicate=<1|0>] [,ro=<1|0>] [,shared=<1|0>] [,size=<DiskSize>]
--mp[n] [volume=]<卷> ,mp=<路径> [,acl=<1|0>] [,backup=<1|0>] [,mountoptions=<选项[;选项...]>] [,quota=<1|0>] [,replicate=<1|0>] [,ro=<1|0>] [,shared=<1|0>] [,size=<磁盘大小>] -
Use volume as container mount point. Use the special syntax STORAGE_ID:SIZE_IN_GiB to allocate a new volume.
使用卷作为容器挂载点。使用特殊语法 STORAGE_ID:SIZE_IN_GiB 来分配一个新卷。 - --nameserver <string>
-
Sets DNS server IP address for a container. Create will automatically use the setting from the host if you neither set searchdomain nor nameserver.
为容器设置 DNS 服务器 IP 地址。如果既未设置 searchdomain 也未设置 nameserver,创建时将自动使用主机的设置。 - --net[n] name=<string> [,bridge=<bridge>] [,firewall=<1|0>] [,gw=<GatewayIPv4>] [,gw6=<GatewayIPv6>] [,hwaddr=<XX:XX:XX:XX:XX:XX>] [,ip=<(IPv4/CIDR|dhcp|manual)>] [,ip6=<(IPv6/CIDR|auto|dhcp|manual)>] [,link_down=<1|0>] [,mtu=<integer>] [,rate=<mbps>] [,tag=<integer>] [,trunks=<vlanid[;vlanid...]>] [,type=<veth>]
-
Specifies network interfaces for the container.
指定容器的网络接口。 -
--onboot <boolean> (default = 0)
--onboot <boolean>(默认值 = 0) -
Specifies whether a container will be started during system bootup.
指定容器是否在系统启动时启动。 - --ostype <alpine | archlinux | centos | debian | devuan | fedora | gentoo | nixos | opensuse | ubuntu | unmanaged>
-
OS type. This is used to setup configuration inside the container, and corresponds to lxc setup scripts in /usr/share/lxc/config/<ostype>.common.conf. Value unmanaged can be used to skip any OS specific setup.
操作系统类型。此项用于在容器内设置配置,对应于 /usr/share/lxc/config/<ostype>.common.conf 中的 lxc 设置脚本。值为 unmanaged 可用于跳过操作系统特定的设置。 - --password <password>
-
Sets root password inside container.
设置容器内的 root 密码。 - --pool <string>
-
Add the VM to the specified pool.
将虚拟机添加到指定的资源池。 -
--protection <boolean> (default = 0)
--protection <boolean>(默认值 = 0) -
Sets the protection flag of the container. This will prevent the CT or the CT’s disk from being removed or updated.
设置容器的保护标志。这将防止容器或容器的磁盘被删除或更新。 -
--rootfs [volume=]<volume> [,acl=<1|0>] [,mountoptions=<opt[;opt...]>] [,quota=<1|0>] [,replicate=<1|0>] [,ro=<1|0>] [,shared=<1|0>] [,size=<DiskSize>]
--rootfs [volume=]<卷> [,acl=<1|0>] [,mountoptions=<选项[;选项...]>] [,quota=<1|0>] [,replicate=<1|0>] [,ro=<1|0>] [,shared=<1|0>] [,size=<磁盘大小>] -
Use volume as container root.
使用卷作为容器根目录。 - --searchdomain <string> --searchdomain <字符串>
-
Sets DNS search domains for a container. Create will automatically use the setting from the host if you neither set searchdomain nor nameserver.
为容器设置 DNS 搜索域。如果既未设置 searchdomain 也未设置 nameserver,创建时将自动使用主机的设置。 -
--ssh-public-keys <filepath>
--ssh-public-keys <文件路径> -
Setup public SSH keys (one key per line, OpenSSH format).
设置公共 SSH 密钥(每行一个密钥,OpenSSH 格式)。 -
--start <boolean> (default = 0)
--start <boolean>(默认值 = 0) -
Start the CT after its creation finished successfully.
在 CT 创建成功完成后启动该 CT。 - --startup `[[order=]\d+] [,up=\d+] [,down=\d+] `
-
Startup and shutdown behavior. Order is a non-negative number defining the general startup order. Shutdown is done in reverse order. Additionally, you can set the up or down delay in seconds, which specifies a delay to wait before the next VM is started or stopped.
启动和关闭行为。顺序是一个非负数,定义了一般的启动顺序。关闭时按相反顺序进行。此外,您可以设置启动或关闭的延迟时间(以秒为单位),指定在启动或关闭下一个虚拟机之前等待的时间。 -
--storage <storage ID> (default = local)
--storage <存储 ID>(默认 = local) -
Default Storage. 默认存储。
-
--swap <integer> (0 - N) (default = 512)
--swap <整数>(0 - N)(默认 = 512) -
Amount of SWAP for the container in MB.
容器的交换空间大小,单位为 MB。 - --tags <string>
-
Tags of the Container. This is only meta information.
容器的标签。这仅是元信息。 -
--template <boolean> (default = 0)
--template <boolean>(默认值 = 0) -
Enable/disable Template.
启用/禁用模板。 - --timezone <string> --timezone <字符串>
-
Time zone to use in the container. If option isn’t set, then nothing will be done. Can be set to host to match the host time zone, or an arbitrary time zone option from /usr/share/zoneinfo/zone.tab
容器中使用的时区。如果未设置此选项,则不会进行任何操作。可以设置为 host 以匹配主机时区,或设置为 /usr/share/zoneinfo/zone.tab 中的任意时区选项。 -
--tty <integer> (0 - 6) (default = 2)
--tty <整数> (0 - 6) (默认 = 2) -
Specify the number of tty available to the container
指定容器可用的 tty 数量 - --unique <boolean>
-
Assign a unique random ethernet address.
分配一个唯一的随机以太网地址。Requires option(s): restore
需要选项:restore -
--unprivileged <boolean> (default = 0)
--unprivileged <boolean>(默认值 = 0) -
Makes the container run as unprivileged user. (Should not be modified manually.)
使容器以非特权用户身份运行。(不应手动修改。) - --unused[n] [volume=]<volume>
-
Reference to unused volumes. This is used internally, and should not be modified manually.
引用未使用的卷。此参数供内部使用,不应手动修改。
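A minimal sketch of how these creation options can be combined. The VMID 200, the Debian template file name, and the local-lvm storage are assumptions for illustration, not values taken from the reference above; adjust them to your environment:

pct create 200 local:vztmpl/debian-12-standard_12.7-1_amd64.tar.zst \
    --hostname ct-web --memory 1024 --swap 512 \
    --rootfs local-lvm:8 \
    --net0 name=eth0,bridge=vmbr0,ip=dhcp \
    --unprivileged 1 --start 1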
pct resume <vmid>
Resume the container. 恢复容器。
-
<vmid>: <integer> (100 - 999999999)
<vmid>: <整数> (100 - 999999999) -
The (unique) ID of the VM.
虚拟机的(唯一)ID。
pct rollback <vmid> <snapname> [OPTIONS]
pct rollback <vmid> <snapname> [选项]
Rollback LXC state to specified snapshot.
将 LXC 状态回滚到指定的快照。
-
<vmid>: <integer> (100 - 999999999)
<vmid>:<整数>(100 - 999999999) -
The (unique) ID of the VM.
虚拟机的(唯一)ID。 - <snapname>: <string>
-
The name of the snapshot.
快照的名称。 -
--start <boolean> (default = 0)
--start <boolean> (默认 = 0) -
Whether the container should get started after rolling back successfully
回滚成功后容器是否应启动
pct set <vmid> [OPTIONS] pct set <vmid> [选项]
Set container options. 设置容器选项。
-
<vmid>: <integer> (100 - 999999999)
<vmid>:<整数>(100 - 999999999) -
The (unique) ID of the VM.
虚拟机的(唯一)ID。 -
--arch <amd64 | arm64 | armhf | i386 | riscv32 | riscv64> (default = amd64)
--arch <amd64 | arm64 | armhf | i386 | riscv32 | riscv64>(默认 = amd64) -
OS architecture type. 操作系统架构类型。
-
--cmode <console | shell | tty> (default = tty)
--cmode <console | shell | tty>(默认 = tty) -
Console mode. By default, the console command tries to open a connection to one of the available tty devices. By setting cmode to console it tries to attach to /dev/console instead. If you set cmode to shell, it simply invokes a shell inside the container (no login).
控制台模式。默认情况下,console 命令尝试连接到一个可用的 tty 设备。将 cmode 设置为 console 时,它会尝试连接到 /dev/console。将 cmode 设置为 shell 时,则会在容器内直接调用一个 Shell(无登录)。 -
--console <boolean> (default = 1)
--console <布尔值>(默认 = 1) -
Attach a console device (/dev/console) to the container.
将控制台设备(/dev/console)附加到容器。 -
--cores <integer> (1 - 8192)
--cores <整数>(1 - 8192) -
The number of cores assigned to the container. A container can use all available cores by default.
分配给容器的核心数。容器默认可以使用所有可用核心。 -
--cpulimit <number> (0 - 8192) (default = 0)
--cpulimit <数字> (0 - 8192) (默认 = 0) -
Limit of CPU usage.
CPU 使用限制。If the computer has 2 CPUs, it has a total of 2 CPU time. Value 0 indicates no CPU limit.
如果计算机有 2 个 CPU,则总共有 2 个 CPU 时间。值为 0 表示没有 CPU 限制。 -
--cpuunits <integer> (0 - 500000) (default = cgroup v1: 1024, cgroup v2: 100)
--cpuunits <整数> (0 - 500000) (默认 = cgroup v1: 1024,cgroup v2: 100) -
CPU weight for a container, will be clamped to [1, 10000] in cgroup v2.
容器的 CPU 权重,在 cgroup v2 中将限制在[1, 10000]范围内。 -
--debug <boolean> (default = 0)
--debug <boolean>(默认值 = 0) -
Try to be more verbose. For now this only enables debug log-level on start.
尝试输出更多详细信息。目前这仅在启动时启用调试日志级别。 - --delete <string>
-
A list of settings you want to delete.
您想要删除的设置列表。 - --description <string> --description <字符串>
-
Description for the Container. Shown in the web-interface CT’s summary. This is saved as comment inside the configuration file.
容器的描述。在网页界面 CT 摘要中显示。此内容作为注释保存在配置文件中。 -
--dev[n] [[path=]<Path>] [,deny-write=<1|0>] [,gid=<integer>] [,mode=<Octal access mode>] [,uid=<integer>]
--dev[n] [[path=]<路径>] [,deny-write=<1|0>] [,gid=<整数>] [,mode=<八进制访问模式>] [,uid=<整数>] -
Device to pass through to the container
要传递给容器的设备 - --digest <string> --digest <字符串>
-
Prevent changes if current configuration file has different SHA1 digest. This can be used to prevent concurrent modifications.
如果当前配置文件的 SHA1 摘要不同,则阻止更改。此功能可用于防止并发修改。 - --features [force_rw_sys=<1|0>] [,fuse=<1|0>] [,keyctl=<1|0>] [,mknod=<1|0>] [,mount=<fstype;fstype;...>] [,nesting=<1|0>]
-
Allow containers access to advanced features.
允许容器访问高级功能。 - --hookscript <string> --hookscript <字符串>
-
Script that will be executed during various steps in the containers lifetime.
将在容器生命周期的各个步骤中执行的脚本。 - --hostname <string> --hostname <字符串>
-
Set a host name for the container.
为容器设置主机名。 - --lock <backup | create | destroyed | disk | fstrim | migrate | mounted | rollback | snapshot | snapshot-delete>
-
Lock/unlock the container.
锁定/解锁容器。 -
--memory <integer> (16 - N) (default = 512)
--memory <整数> (16 - N) (默认 = 512) -
Amount of RAM for the container in MB.
容器的内存大小,单位为 MB。 -
--mp[n] [volume=]<volume> ,mp=<Path> [,acl=<1|0>] [,backup=<1|0>] [,mountoptions=<opt[;opt...]>] [,quota=<1|0>] [,replicate=<1|0>] [,ro=<1|0>] [,shared=<1|0>] [,size=<DiskSize>]
--mp[n] [volume=]<volume> ,mp=<路径> [,acl=<1|0>] [,backup=<1|0>] [,mountoptions=<opt[;opt...]>] [,quota=<1|0>] [,replicate=<1|0>] [,ro=<1|0>] [,shared=<1|0>] [,size=<磁盘大小>] -
Use volume as container mount point. Use the special syntax STORAGE_ID:SIZE_IN_GiB to allocate a new volume.
使用卷作为容器挂载点。使用特殊语法 STORAGE_ID:SIZE_IN_GiB 来分配新卷。 - --nameserver <string> --nameserver <字符串>
-
Sets DNS server IP address for a container. Create will automatically use the setting from the host if you neither set searchdomain nor nameserver.
为容器设置 DNS 服务器 IP 地址。如果您既未设置 searchdomain 也未设置 nameserver,创建时将自动使用主机的设置。 - --net[n] name=<string> [,bridge=<bridge>] [,firewall=<1|0>] [,gw=<GatewayIPv4>] [,gw6=<GatewayIPv6>] [,hwaddr=<XX:XX:XX:XX:XX:XX>] [,ip=<(IPv4/CIDR|dhcp|manual)>] [,ip6=<(IPv6/CIDR|auto|dhcp|manual)>] [,link_down=<1|0>] [,mtu=<integer>] [,rate=<mbps>] [,tag=<integer>] [,trunks=<vlanid[;vlanid...]>] [,type=<veth>]
-
Specifies network interfaces for the container.
指定容器的网络接口。 -
--onboot <boolean> (default = 0)
--onboot <boolean>(默认 = 0) -
Specifies whether a container will be started during system bootup.
指定容器是否会在系统启动时启动。 - --ostype <alpine | archlinux | centos | debian | devuan | fedora | gentoo | nixos | opensuse | ubuntu | unmanaged>
-
OS type. This is used to set up the configuration inside the container, and corresponds to the lxc setup scripts in /usr/share/lxc/config/<ostype>.common.conf. The value unmanaged can be used to skip any OS specific setup.
操作系统类型。用于在容器内设置配置,对应于 /usr/share/lxc/config/<ostype>.common.conf 中的 lxc 设置脚本。值 unmanaged 可用于跳过操作系统特定的设置。 -
--protection <boolean> (default = 0)
--protection <boolean>(默认 = 0) -
Sets the protection flag of the container. This will prevent the CT or the CT’s disk from being removed or updated.
设置容器的保护标志。这将防止容器或容器的磁盘被移除或更新。 - --revert <string>
-
Revert a pending change.
还原一个待处理的更改。 -
--rootfs [volume=]<volume> [,acl=<1|0>] [,mountoptions=<opt[;opt...]>] [,quota=<1|0>] [,replicate=<1|0>] [,ro=<1|0>] [,shared=<1|0>] [,size=<DiskSize>]
--rootfs [volume=]<volume> [,acl=<1|0>] [,mountoptions=<opt[;opt...]>] [,quota=<1|0>] [,replicate=<1|0>] [,ro=<1|0>] [,shared=<1|0>] [,size=<磁盘大小>] -
Use volume as container root.
使用卷作为容器根目录。 - --searchdomain <string>
-
Sets DNS search domains for a container. Create will automatically use the setting from the host if you neither set searchdomain nor nameserver.
为容器设置 DNS 搜索域。如果既未设置 searchdomain 也未设置 nameserver,创建时将自动使用主机的设置。 - --startup `[[order=]\d+] [,up=\d+] [,down=\d+] `
-
Startup and shutdown behavior. Order is a non-negative number defining the general startup order. Shutdown is done in reverse order. Additionally, you can set the up or down delay in seconds, which specifies a delay to wait before the next VM is started or stopped.
启动和关闭行为。顺序是一个非负数,定义了一般的启动顺序。关闭时按相反顺序进行。此外,您可以设置启动或关闭的延迟时间(以秒为单位),指定在启动或关闭下一个虚拟机之前等待的时间。 -
--swap <integer> (0 - N) (default = 512)
--swap <整数>(0 - N)(默认值 = 512) -
Amount of SWAP for the container in MB.
容器的交换空间大小,单位为 MB。 - --tags <string> --tags <字符串>
-
Tags of the Container. This is only meta information.
容器的标签。这只是元信息。 -
--template <boolean> (default = 0)
--template <boolean>(默认 = 0) -
Enable/disable Template.
启用/禁用模板。 - --timezone <string>
-
Time zone to use in the container. If option isn’t set, then nothing will be done. Can be set to host to match the host time zone, or an arbitrary time zone option from /usr/share/zoneinfo/zone.tab
容器中使用的时区。如果未设置此选项,则不会进行任何操作。可以设置为 host 以匹配主机时区,或者设置为/usr/share/zoneinfo/zone.tab 中的任意时区选项 -
--tty <integer> (0 - 6) (default = 2)
--tty <整数> (0 - 6)(默认 = 2) -
Specify the number of tty available to the container
指定容器可用的 tty 数量 -
--unprivileged <boolean> (default = 0)
--unprivileged <布尔值>(默认 = 0) -
Makes the container run as unprivileged user. (Should not be modified manually.)
使容器以非特权用户身份运行。(不应手动修改。) - --unused[n] [volume=]<volume>
-
Reference to unused volumes. This is used internally, and should not be modified manually.
引用未使用的卷。此参数供内部使用,不应手动修改。
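As a hedged example of pct set usage (the VMID 200 is illustrative): the first call adjusts resources and autostart, the second uses --delete to drop an existing setting.

pct set 200 --memory 2048 --cores 2 --onboot 1
pct set 200 --delete description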
pct shutdown <vmid> [OPTIONS]
pct shutdown <vmid> [选项]
Shutdown the container. This will trigger a clean shutdown of the
container, see lxc-stop(1) for details.
关闭容器。这将触发容器的正常关闭,详情请参见 lxc-stop(1)。
-
<vmid>: <integer> (100 - 999999999)
<vmid>: <整数>(100 - 999999999) -
The (unique) ID of the VM.
虚拟机的(唯一)ID。 -
--forceStop <boolean> (default = 0)
--forceStop <布尔值>(默认 = 0) -
Make sure the Container stops.
确保容器停止。 -
--timeout <integer> (0 - N) (default = 60)
--timeout <整数> (0 - N) (默认 = 60) -
Wait maximal timeout seconds.
等待最长超时时间(秒)。
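For example, to request a clean shutdown but force a stop if it has not finished after two minutes (VMID 200 is an assumption for illustration):

pct shutdown 200 --timeout 120 --forceStop 1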
pct snapshot <vmid> <snapname> [OPTIONS]
pct snapshot <vmid> <快照名> [选项]
Snapshot a container. 快照一个容器。
-
<vmid>: <integer> (100 - 999999999)
<vmid>: <整数>(100 - 999999999) -
The (unique) ID of the VM.
虚拟机的(唯一)ID。 - <snapname>: <string> <snapname>: <字符串>
-
The name of the snapshot.
快照的名称。 - --description <string>
-
A textual description or comment.
文本描述或注释。
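A typical snapshot/rollback round trip using the pct snapshot and pct rollback commands described above might look like the following sketch (the VMID and snapshot name are assumptions):

pct snapshot 200 pre-upgrade --description "state before dist-upgrade"
pct rollback 200 pre-upgrade --start 1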
pct start <vmid> [OPTIONS]
pct start <vmid> [选项]
Start the container. 启动容器。
-
<vmid>: <integer> (100 - 999999999)
<vmid>: <整数>(100 - 999999999) -
The (unique) ID of the VM.
虚拟机的(唯一)ID。 -
--debug <boolean> (default = 0)
--debug <布尔值>(默认 = 0) -
If set, enables very verbose debug log-level on start.
如果设置,启动时启用非常详细的调试日志级别。 - --skiplock <boolean>
-
Ignore locks - only root is allowed to use this option.
忽略锁定——只有 root 用户被允许使用此选项。
pct status <vmid> [OPTIONS]
pct status <vmid> [选项]
Show CT status. 显示 CT 状态。
-
<vmid>: <integer> (100 - 999999999)
<vmid>: <整数> (100 - 999999999) -
The (unique) ID of the VM.
虚拟机的(唯一)ID。 - --verbose <boolean> --verbose <布尔值>
-
Verbose output format 详细输出格式
pct stop <vmid> [OPTIONS]
pct stop <vmid> [选项]
Stop the container. This will abruptly stop all processes running in the
container.
停止容器。这将立即停止容器中运行的所有进程。
-
<vmid>: <integer> (100 - 999999999)
<vmid>:<整数>(100 - 999999999) -
The (unique) ID of the VM.
虚拟机的(唯一)ID。 -
--overrule-shutdown <boolean> (default = 0)
--overrule-shutdown <boolean>(默认值 = 0) -
Try to abort active vzshutdown tasks before stopping.
尝试在停止之前中止正在进行的 vzshutdown 任务。 - --skiplock <boolean>
-
Ignore locks - only root is allowed to use this option.
忽略锁定——只有 root 用户被允许使用此选项。
pct suspend <vmid>
Suspend the container. This is experimental.
挂起容器。这是实验性功能。
-
<vmid>: <integer> (100 - 999999999)
<vmid>:<整数>(100 - 999999999) -
The (unique) ID of the VM.
虚拟机的(唯一)ID。
pct template <vmid>
Create a Template. 创建一个模板。
-
<vmid>: <integer> (100 - 999999999)
<vmid>:<整数>(100 - 999999999) -
The (unique) ID of the VM.
虚拟机的(唯一)ID。
pct unlock <vmid>
Unlock the VM. 解锁虚拟机。
-
<vmid>: <integer> (100 - 999999999)
<vmid>:<整数>(100 - 999999999) -
The (unique) ID of the VM.
虚拟机的(唯一)ID。
pct unmount <vmid>
Unmount the container’s filesystem.
卸载容器的文件系统。
-
<vmid>: <integer> (100 - 999999999)
<vmid>:<整数>(100 - 999999999) -
The (unique) ID of the VM.
虚拟机的(唯一)ID。
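Putting the lifecycle commands above together, a plausible sequence for an existing container 200 (an illustrative VMID) could be the following; the final pct unlock is only needed if a stale lock remains after a failed operation:

pct start 200
pct status 200 --verbose
pct stop 200 --overrule-shutdown 1
pct unlock 200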
22.12. pveam - Proxmox VE Appliance Manager
22.12. pveam - Proxmox VE 设备管理器
pveam <COMMAND> [ARGS] [OPTIONS]
pveam <命令> [参数] [选项]
pveam available [OPTIONS]
pveam available [选项]
List available templates.
列出可用的模板。
- --section <mail | system | turnkeylinux>
-
Restrict list to specified section.
将列表限制为指定部分。
pveam download <storage> <template>
Download appliance templates.
下载设备模板。
- <storage>: <storage ID> <storage>:<存储 ID>
-
The storage where the template will be stored
模板将被存储的存储位置 - <template>: <string> <template>:<字符串>
-
The template which will be downloaded
将要下载的模板
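A common appliance workflow, assuming the built-in local storage and a Debian template name taken from the available list (the exact file name will differ on your system):

pveam update
pveam available --section system
pveam download local debian-12-standard_12.7-1_amd64.tar.zst
pveam list local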
pveam help [OPTIONS] pveam help [选项]
Get help about specified command.
获取指定命令的帮助。
- --extra-args <array> --extra-args <数组>
-
Shows help for a specific command
显示特定命令的帮助信息 - --verbose <boolean>
-
Verbose output format. 详细输出格式。
pveam list <storage>
Get list of all templates on storage
获取存储上的所有模板列表
- <storage>: <storage ID> <storage>: <存储 ID>
-
Only list templates on specified storage
仅列出指定存储上的模板
pveam remove <template_path>
pveam remove <模板路径>
Remove a template. 删除一个模板。
-
<template_path>: <string>
<template_path>: <字符串> -
The template to remove.
要删除的模板。
pveam update
Update Container Template Database.
更新容器模板数据库。
22.13. pvecm - Proxmox VE Cluster Manager
22.13. pvecm - Proxmox VE 集群管理器
pvecm <COMMAND> [ARGS] [OPTIONS]
pvecm <命令> [参数] [选项]
pvecm add <hostname> [OPTIONS]
pvecm add <主机名> [选项]
Adds the current node to an existing cluster.
将当前节点添加到现有集群中。
- <hostname>: <string>
-
Hostname (or IP) of an existing cluster member.
现有集群成员的主机名(或 IP)。 - --fingerprint ([A-Fa-f0-9]{2}:){31}[A-Fa-f0-9]{2}
-
Certificate SHA 256 fingerprint.
证书 SHA 256 指纹。 - --force <boolean> --force <布尔值>
-
Do not throw error if node already exists.
如果节点已存在,则不抛出错误。 -
--link[n] [address=]<IP> [,priority=<integer>]
--link[n] [address=]<IP> [,priority=<整数>] -
Address and priority information of a single corosync link. (up to 8 links supported; link0..link7)
单个 corosync 链路的地址和优先级信息。(支持最多 8 条链路;link0..link7) -
--nodeid <integer> (1 - N)
--nodeid <整数>(1 - N) -
Node id for this node.
此节点的节点 ID。 - --use_ssh <boolean> --use_ssh <布尔值>
-
Always use SSH to join, even if peer may do it over API.
始终使用 SSH 加入,即使对等方可能通过 API 进行。 -
--votes <integer> (0 - N)
--votes <整数> (0 - N) -
Number of votes for this node
该节点的投票数
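For example, to join an existing cluster whose first node is reachable at 10.10.10.1, using a dedicated cluster network address for this node (all addresses are placeholders):

pvecm add 10.10.10.1 --link0 10.10.10.2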
pvecm addnode <node> [OPTIONS]
pvecm addnode <节点> [选项]
Adds a node to the cluster configuration. This call is for internal use.
将节点添加到集群配置中。此调用仅供内部使用。
- <node>: <string>
-
The cluster node name.
集群节点名称。 - --apiversion <integer>
-
The JOIN_API_VERSION of the new node.
新节点的 JOIN_API_VERSION。 - --force <boolean>
-
Do not throw error if node already exists.
如果节点已存在,则不抛出错误。 - --link[n] [address=]<IP> [,priority=<integer>]
-
Address and priority information of a single corosync link. (up to 8 links supported; link0..link7)
单个 corosync 链路的地址和优先级信息。(支持最多 8 条链路;link0..link7) - --new_node_ip <string> --new_node_ip <字符串>
-
IP Address of node to add. Used as fallback if no links are given.
要添加的节点的 IP 地址。如果未提供链路,则作为备用使用。 -
--nodeid <integer> (1 - N)
--nodeid <整数>(1 - N) -
Node id for this node.
此节点的节点 ID。 -
--votes <integer> (0 - N)
--votes <整数> (0 - N) -
Number of votes for this node
此节点的投票数
pvecm apiver
Return the version of the cluster join API available on this node.
返回此节点上可用的集群加入 API 版本。
pvecm create <clustername> [OPTIONS]
pvecm create <clustername> [选项]
Generate new cluster configuration. If no links given, default to local IP
address as link0.
生成新的集群配置。如果未提供链接,默认使用本地 IP 地址作为 link0。
- <clustername>: <string> <clustername>:<字符串>
-
The name of the cluster.
集群的名称。 -
--link[n] [address=]<IP> [,priority=<integer>]
--link[n] [address=]<IP> [,priority=<整数>] -
Address and priority information of a single corosync link. (up to 8 links supported; link0..link7)
单个 corosync 链路的地址和优先级信息。(支持最多 8 条链路;link0..link7) -
--nodeid <integer> (1 - N)
--nodeid <整数>(1 - N) -
Node id for this node.
此节点的节点 ID。 -
--votes <integer> (1 - N)
--votes <整数> (1 - N) -
Number of votes for this node.
此节点的投票数。
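A minimal sketch for bootstrapping a new cluster on the first node (the cluster name and address are assumptions):

pvecm create demo-cluster --link0 10.10.10.1
pvecm status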
pvecm delnode <node> pvecm delnode <节点>
Removes a node from the cluster configuration.
从集群配置中移除一个节点。
- <node>: <string>
-
The cluster node name.
集群节点名称。
pvecm expected <expected>
Tells corosync a new value of expected votes.
告诉 corosync 预期投票的新值。
-
<expected>: <integer> (1 - N)
<expected>: <整数> (1 - N) -
Expected votes 预期投票数
pvecm help [OPTIONS] pvecm help [选项]
Get help about specified command.
获取指定命令的帮助。
- --extra-args <array> --extra-args <数组>
-
Shows help for a specific command
显示特定命令的帮助 - --verbose <boolean> --verbose <布尔值>
-
Verbose output format. 详细输出格式。
pvecm keygen <filename>
Generate new cryptographic key for corosync.
为 corosync 生成新的加密密钥。
- <filename>: <string> <filename>:<字符串>
-
Output file name 输出文件名
pvecm mtunnel [<extra-args>] [OPTIONS]
pvecm mtunnel [<extra-args>] [选项]
Used by VM/CT migration - do not use manually.
由虚拟机/容器迁移使用 - 请勿手动使用。
- <extra-args>: <array> <extra-args>: <数组>
-
Extra arguments as array
额外参数作为数组 -
--get_migration_ip <boolean> (default = 0)
--get_migration_ip <布尔值>(默认 = 0) -
return the migration IP, if configured
返回迁移 IP(如果已配置) -
--migration_network <string>
--migration_network <字符串> -
the migration network used to detect the local migration IP
用于检测本地迁移 IP 的迁移网络 - --run-command <boolean> --run-command <布尔值>
-
Run a command with a TCP socket as standard input. The IP address and port are printed via this command’s standard output first, each on a separate line.
使用 TCP 套接字作为标准输入运行命令。该命令的标准输出首先打印 IP 地址和端口,每个占一行。
pvecm nodes
Displays the local view of the cluster nodes.
显示集群节点的本地视图。
pvecm qdevice remove
Remove a configured QDevice
移除已配置的 QDevice
pvecm qdevice setup <address> [OPTIONS]
pvecm qdevice setup <address> [选项]
Setup the use of a QDevice
设置 QDevice 的使用
- <address>: <string>
-
Specifies the network address of an external corosync QDevice
指定外部 corosync QDevice 的网络地址 - --force <boolean>
-
Do not throw error on possible dangerous operations.
不要在可能的危险操作上抛出错误。 - --network <string>
-
The network which should be used to connect to the external qdevice
用于连接到外部 qdevice 的网络
pvecm status
Displays the local view of the cluster status.
显示集群状态的本地视图。
pvecm updatecerts [OPTIONS]
pvecm updatecerts [选项]
Update node certificates (and generate all needed files/directories).
更新节点证书(并生成所有需要的文件/目录)。
- --force <boolean> --force <布尔值>
-
Force generation of new SSL certificate.
强制生成新的 SSL 证书。 - --silent <boolean>
-
Ignore errors (i.e. when cluster has no quorum).
忽略错误(例如集群没有法定人数时)。 -
--unmerge-known-hosts <boolean> (default = 0)
--unmerge-known-hosts <boolean>(默认值 = 0) -
Unmerge legacy SSH known hosts.
取消合并旧版 SSH 已知主机。
22.14. pvesr - Proxmox VE Storage Replication
22.14. pvesr - Proxmox VE 存储复制
pvesr <COMMAND> [ARGS] [OPTIONS]
pvesr <命令> [参数] [选项]
pvesr create-local-job <id> <target> [OPTIONS]
pvesr create-local-job <id> <目标> [选项]
Create a new replication job
创建一个新的复制任务
- <id>: [1-9][0-9]{2,8}-\d{1,9}
-
Replication Job ID. The ID is composed of a Guest ID and a job number, separated by a hyphen, i.e. <GUEST>-<JOBNUM>.
复制任务 ID。该 ID 由一个客户机 ID 和一个任务编号组成,两者之间用连字符分隔,即 <GUEST>-<JOBNUM>。 - <target>: <string>
-
Target node. 目标节点。
- --comment <string> --comment <字符串>
-
Description. 描述。
- --disable <boolean> --disable <布尔值>
-
Flag to disable/deactivate the entry.
用于禁用/停用该条目。 -
--rate <number> (1 - N)
--rate <数字> (1 - N) -
Rate limit in mbps (megabytes per second) as floating point number.
以浮点数表示的速率限制,单位为 mbps(兆字节每秒)。 - --remove_job <full | local>
-
Mark the replication job for removal. The job will remove all local replication snapshots. When set to full, it also tries to remove replicated volumes on the target. The job then removes itself from the configuration file.
标记复制任务以便移除。该任务将删除所有本地复制快照。设置为 full 时,还会尝试删除目标上的复制卷。然后该任务会从配置文件中删除自身。 -
--schedule <string> (default = */15)
--schedule <string>(默认 = */15) -
Storage replication schedule. The format is a subset of systemd calendar events.
存储复制计划。格式是 systemd 日历事件的一个子集。 - --source <string>
-
For internal use, to detect if the guest was stolen.
供内部使用,用于检测客户机是否被盗。
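As an illustrative example, replicating guest 100 to a node named pve-node2 every 30 minutes with a 50 MB/s rate limit (the job ID, node name, and values are assumptions):

pvesr create-local-job 100-0 pve-node2 --schedule "*/30" --rate 50 --comment "replicate CT 100 to pve-node2"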
pvesr delete <id> [OPTIONS]
pvesr delete <id> [选项]
Mark replication job for removal.
标记复制任务以便删除。
-
<id>: [1-9][0-9]{2,8}-\d{1,9}
<id>:[1-9][0-9]{2,8}-\d{1,9} -
Replication Job ID. The ID is composed of a Guest ID and a job number, separated by a hyphen, i.e. <GUEST>-<JOBNUM>.
复制任务 ID。该 ID 由一个客户机 ID 和一个任务编号组成,中间用连字符分隔,即 <GUEST>-<JOBNUM>。 -
--force <boolean> (default = 0)
--force <boolean>(默认值 = 0) -
Will remove the jobconfig entry, but will not cleanup.
将删除 jobconfig 条目,但不会进行清理。 -
--keep <boolean> (default = 0)
--keep <boolean>(默认值 = 0) -
Keep replicated data at target (do not remove).
保留目标处的复制数据(不删除)。
pvesr disable <id>
Disable a replication job.
禁用复制任务。
-
<id>: [1-9][0-9]{2,8}-\d{1,9}
<id>:[1-9][0-9]{2,8}-\d{1,9} -
Replication Job ID. The ID is composed of a Guest ID and a job number, separated by a hyphen, i.e. <GUEST>-<JOBNUM>.
复制任务 ID。该 ID 由一个虚拟机 ID 和一个任务编号组成,中间用连字符分隔,即<GUEST>-<JOBNUM>。
pvesr enable <id>
Enable a replication job.
启用复制任务。
-
<id>: [1-9][0-9]{2,8}-\d{1,9}
<id>:[1-9][0-9]{2,8}-\d{1,9} -
Replication Job ID. The ID is composed of a Guest ID and a job number, separated by a hyphen, i.e. <GUEST>-<JOBNUM>.
复制任务 ID。该 ID 由一个客户机 ID 和一个任务编号组成,中间用连字符分隔,即 <GUEST>-<JOBNUM>。
pvesr finalize-local-job <id> [<extra-args>] [OPTIONS]
pvesr finalize-local-job <id> [<extra-args>] [选项]
Finalize a replication job. This removes all replication snapshots with
timestamps different from <last_sync>.
完成复制任务。这将删除所有时间戳与 <last_sync> 不同的复制快照。
-
<id>: [1-9][0-9]{2,8}-\d{1,9}
<id>:[1-9][0-9]{2,8}-\d{1,9} -
Replication Job ID. The ID is composed of a Guest ID and a job number, separated by a hyphen, i.e. <GUEST>-<JOBNUM>.
复制任务 ID。该 ID 由一个客户机 ID 和一个任务编号组成,中间用连字符分隔,即<GUEST>-<JOBNUM>。 - <extra-args>: <array> <extra-args>: <数组>
-
The list of volume IDs to consider.
要考虑的卷 ID 列表。 -
--last_sync <integer> (0 - N)
--last_sync <整数> (0 - N) -
Time (UNIX epoch) of last successful sync. If not specified, all replication snapshots get removed.
上次成功同步的时间(UNIX 纪元时间)。如果未指定,则所有复制快照将被移除。
pvesr help [OPTIONS] pvesr help [选项]
Get help about specified command.
获取指定命令的帮助。
- --extra-args <array> --extra-args <数组>
-
Shows help for a specific command
显示特定命令的帮助信息 - --verbose <boolean>
-
Verbose output format. 详细输出格式。
pvesr list
List replication jobs. 列出复制任务。
pvesr prepare-local-job <id> [<extra-args>] [OPTIONS]
pvesr prepare-local-job <id> [<extra-args>] [选项]
Prepare for starting a replication job. This is called on the target node
before replication starts. This call is for internal use, and returns a JSON
object on stdout. The method first tests if VM <vmid> resides on the local
node. If so, it stops immediately. After that, the method scans all volume IDs
for snapshots, and removes all replication snapshots with timestamps
different from <last_sync>. It also removes any unused volumes. Returns a
hash with boolean markers for all volumes with existing replication
snapshots.
准备启动复制任务。在复制开始前,此命令在目标节点上调用。此调用仅供内部使用,并在标准输出返回一个 JSON 对象。该方法首先测试虚拟机 <vmid> 是否驻留在本地节点上。如果是,则立即停止。之后,该方法扫描所有快照的卷 ID,并移除所有时间戳与 <last_sync> 不同的复制快照。它还会移除任何未使用的卷。返回一个包含所有存在复制快照的卷的布尔标记的哈希。
- <id>: [1-9][0-9]{2,8}-\d{1,9}
-
Replication Job ID. The ID is composed of a Guest ID and a job number, separated by a hyphen, i.e. <GUEST>-<JOBNUM>.
复制任务 ID。该 ID 由一个客户机 ID 和一个任务编号组成,中间用连字符分隔,即<GUEST>-<JOBNUM>。 - <extra-args>: <array> <extra-args>:<数组>
-
The list of volume IDs to consider.
要考虑的卷 ID 列表。 -
--force <boolean> (default = 0)
--force <布尔值>(默认 = 0) -
Allows removing all existing volumes (empty volume list).
允许删除所有现有卷(空卷列表)。 -
--last_sync <integer> (0 - N)
--last_sync <整数> (0 - N) -
Time (UNIX epoch) of last successful sync. If not specified, all replication snapshots get removed.
上次成功同步的时间(UNIX 纪元时间)。如果未指定,则删除所有复制快照。 -
--parent_snapname <string>
--parent_snapname <字符串> -
The name of the snapshot.
快照的名称。 - --scan <string>
-
List of storage IDs to scan for stale volumes.
要扫描的存储 ID 列表,用于查找过时的卷。
pvesr read <id>
Read replication job configuration.
读取复制任务配置。
- <id>: [1-9][0-9]{2,8}-\d{1,9}
-
Replication Job ID. The ID is composed of a Guest ID and a job number, separated by a hyphen, i.e. <GUEST>-<JOBNUM>.
复制任务 ID。该 ID 由一个客户机 ID 和一个任务编号组成,中间用连字符分隔,即 <GUEST>-<JOBNUM>。
pvesr run [OPTIONS]
This method is called by the systemd-timer and executes all (or a specific)
sync jobs.
此方法由 systemd-timer 调用,执行所有(或特定的)同步任务。
- --id [1-9][0-9]{2,8}-\d{1,9}
-
Replication Job ID. The ID is composed of a Guest ID and a job number, separated by a hyphen, i.e. <GUEST>-<JOBNUM>.
复制任务 ID。该 ID 由一个客户机 ID 和一个任务编号组成,中间用连字符分隔,即 <GUEST>-<JOBNUM>。 -
--mail <boolean> (default = 0)
--mail <boolean>(默认 = 0) -
Send an email notification in case of a failure.
在发生故障时发送电子邮件通知。 -
--verbose <boolean> (default = 0)
--verbose <boolean>(默认 = 0) -
Print more verbose logs to stdout.
将更详细的日志打印到标准输出。
pvesr schedule-now <id>
Schedule replication job to start as soon as possible.
安排复制任务尽快开始。
- <id>: [1-9][0-9]{2,8}-\d{1,9}
-
Replication Job ID. The ID is composed of a Guest ID and a job number, separated by a hyphen, i.e. <GUEST>-<JOBNUM>.
复制任务 ID。该 ID 由一个虚拟机 ID 和一个任务编号组成,中间用连字符分隔,即 <GUEST>-<JOBNUM>。
pvesr set-state <vmid> <state>
Set the job replication state on migration. This call is for internal use.
It will accept the job state as a JSON object.
在迁移时设置作业复制状态。此调用仅供内部使用。它将接受作业状态作为 JSON 对象。
-
<vmid>: <integer> (100 - 999999999)
<vmid>: <整数>(100 - 999999999) -
The (unique) ID of the VM.
虚拟机的(唯一)ID。 - <state>: <string> <state>: <字符串>
-
Job state as JSON decoded string.
作业状态,作为解码后的 JSON 字符串。
pvesr status [OPTIONS] pvesr status [选项]
List status of all replication jobs on this node.
列出此节点上所有复制作业的状态。
-
--guest <integer> (100 - 999999999)
--guest <整数> (100 - 999999999) -
Only list replication jobs for this guest.
仅列出此客户机的复制任务。
pvesr update <id> [OPTIONS]
pvesr update <id> [选项]
Update replication job configuration.
更新复制任务配置。
-
<id>: [1-9][0-9]{2,8}-\d{1,9}
<id>:[1-9][0-9]{2,8}-\d{1,9} -
Replication Job ID. The ID is composed of a Guest ID and a job number, separated by a hyphen, i.e. <GUEST>-<JOBNUM>.
复制任务 ID。该 ID 由一个客户机 ID 和一个任务编号组成,中间用连字符分隔,即 <GUEST>-<JOBNUM>。 - --comment <string>
-
Description. 描述。
- --delete <string>
-
A list of settings you want to delete.
您想要删除的设置列表。 - --digest <string>
-
Prevent changes if current configuration file has a different digest. This can be used to prevent concurrent modifications.
如果当前配置文件的摘要不同,则阻止更改。此功能可用于防止并发修改。 - --disable <boolean>
-
Flag to disable/deactivate the entry.
用于禁用/停用该条目。 -
--rate <number> (1 - N)
--rate <数字> (1 - N) -
Rate limit in mbps (megabytes per second) as floating point number.
以浮点数表示的速率限制,单位为 mbps(兆字节每秒)。 - --remove_job <full | local>
-
Mark the replication job for removal. The job will remove all local replication snapshots. When set to full, it also tries to remove replicated volumes on the target. The job then removes itself from the configuration file.
标记复制任务以便移除。该任务将删除所有本地复制快照。设置为 full 时,还会尝试删除目标上的复制卷。然后该任务会从配置文件中删除自身。 -
--schedule <string> (default = */15)
--schedule <string>(默认 = */15) -
Storage replication schedule. The format is a subset of systemd calendar events.
存储复制计划。格式是 systemd 日历事件的一个子集。 - --source <string>
-
For internal use, to detect if the guest was stolen.
供内部使用,用于检测客户机是否被盗。
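For instance, to move an existing job to a nightly schedule, trigger it immediately, and then check its state (the job ID and schedule value are illustrative):

pvesr update 100-0 --schedule "22:30"
pvesr schedule-now 100-0
pvesr status --guest 100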
22.15. pveum - Proxmox VE User Manager
22.15. pveum - Proxmox VE 用户管理器
pveum <COMMAND> [ARGS] [OPTIONS]
pveum <命令> [参数] [选项]
pveum acl delete <path> --roles <string> [OPTIONS]
pveum acl delete <路径> --roles <字符串> [选项]
Update Access Control List (add or remove permissions).
更新访问控制列表(添加或移除权限)。
- <path>: <string> <path>:<字符串>
-
Access control path 访问控制路径
- --groups <string> --groups <字符串>
-
List of groups. 组列表。
-
--propagate <boolean> (default = 1)
--propagate <boolean>(默认 = 1) -
Allow to propagate (inherit) permissions.
允许传播(继承)权限。 - --roles <string>
-
List of roles. 角色列表。
- --tokens <string>
-
List of API tokens.
API 代币列表。 - --users <string>
-
List of users. 用户列表。
pveum acl list [FORMAT_OPTIONS]
pveum acl list [格式选项]
Get Access Control List (ACLs).
获取访问控制列表(ACL)。
pveum acl modify <path> --roles <string> [OPTIONS]
pveum acl modify <路径> --roles <字符串> [选项]
Update Access Control List (add or remove permissions).
更新访问控制列表(添加或移除权限)。
- <path>: <string> <path>:<字符串>
-
Access control path 访问控制路径
- --groups <string> --groups <字符串>
-
List of groups. 组列表。
-
--propagate <boolean> (default = 1)
--propagate <boolean>(默认 = 1) -
Allow to propagate (inherit) permissions.
允许传播(继承)权限。 - --roles <string>
-
List of roles. 角色列表。
- --tokens <string>
-
List of API tokens.
API 代币列表。 - --users <string>
-
List of users. 用户列表。
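A hedged example of granting and later revoking a role on a path; the user, the path, and the choice of the built-in PVEVMUser role are placeholders:

pveum acl modify /vms/100 --users alice@pve --roles PVEVMUser
pveum acl delete /vms/100 --users alice@pve --roles PVEVMUser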
pveum acldel
An alias for pveum acl delete.
pveum acl delete 的别名。
pveum aclmod
An alias for pveum acl modify.
pveum acl modify 的别名。
pveum group add <groupid> [OPTIONS]
pveum group add <groupid> [选项]
Create new group. 创建新组。
- <groupid>: <string> <groupid>:<字符串>
-
no description available
无可用描述 - --comment <string>
-
no description available
无可用描述
pveum group delete <groupid>
Delete group. 删除组。
- <groupid>: <string>
-
no description available
无可用描述
pveum group list [FORMAT_OPTIONS]
Group index. 组索引。
pveum group modify <groupid> [OPTIONS]
pveum group modify <groupid> [选项]
Update group data. 更新组数据。
- <groupid>: <string> <groupid>:<字符串>
-
no description available
无可用描述 - --comment <string>
-
no description available
无可用描述
pveum groupadd
An alias for pveum group add.
pveum group add 的别名。
pveum groupdel
An alias for pveum group delete.
pveum group delete 的别名。
pveum groupmod
An alias for pveum group modify.
pveum group modify 的别名。
pveum help [OPTIONS] pveum help [选项]
Get help about specified command.
获取指定命令的帮助。
- --extra-args <array> --extra-args <数组>
-
Shows help for a specific command
显示特定命令的帮助信息 - --verbose <boolean>
-
Verbose output format. 详细输出格式。
pveum passwd <userid> [OPTIONS]
pveum passwd <userid> [选项]
Change user password. 更改用户密码。
- <userid>: <string>
-
Full User ID, in the name@realm format.
完整的用户 ID,格式为 name@realm。 - --confirmation-password <string>
-
The current password of the user performing the change.
执行更改的用户当前密码。
pveum pool add <poolid> [OPTIONS]
pveum pool add <poolid> [选项]
Create new pool. 创建新资源池。
- <poolid>: <string> <poolid>:<字符串>
-
no description available
无可用描述 - --comment <string>
-
no description available
无可用描述
pveum pool delete <poolid>
Delete pool. 删除资源池。
- <poolid>: <string> <poolid>: <字符串>
-
no description available
无可用描述
pveum pool list [OPTIONS] [FORMAT_OPTIONS]
pveum pool list [选项] [格式选项]
List pools or get pool configuration.
列出资源池或获取资源池配置。
- --poolid <string>
-
no description available
无可用描述 - --type <lxc | qemu | storage>
-
no description available
无可用描述Requires option(s): poolid
需要选项:poolid
pveum pool modify <poolid> [OPTIONS]
pveum pool modify <poolid> [选项]
Update pool. 更新资源池。
- <poolid>: <string>
-
no description available
无可用描述 -
--allow-move <boolean> (default = 0)
--allow-move <boolean>(默认 = 0) -
Allow adding a guest even if already in another pool. The guest will be removed from its current pool and added to this one.
允许添加已存在于其他资源池中的虚拟机。该虚拟机将从当前资源池中移除并添加到此资源池。 - --comment <string>
-
no description available
无可用描述 -
--delete <boolean> (default = 0)
--delete <boolean>(默认值 = 0) -
Remove the passed VMIDs and/or storage IDs instead of adding them.
移除传入的 VMID 和/或存储 ID,而不是添加它们。 - --storage <string>
-
List of storage IDs to add or remove from this pool.
要添加到或从此池中移除的存储 ID 列表。 - --vms <string>
-
List of guest VMIDs to add or remove from this pool.
要添加到或从此池中移除的虚拟机 VMID 列表。
pveum realm add <realm> --type <string> [OPTIONS]
pveum realm add <realm> --type <string> [选项]
Add an authentication server.
添加一个认证服务器。
- <realm>: <string>
-
Authentication domain ID
认证域 ID - --acr-values ^[^\x00-\x1F\x7F <>#"]*$
-
Specifies the Authentication Context Class Reference values that the Authorization Server is being requested to use for the Auth Request.
指定请求授权服务器在认证请求中使用的认证上下文类引用值。 -
--autocreate <boolean> (default = 0)
--autocreate <boolean>(默认 = 0) -
Automatically create users if they do not exist.
如果用户不存在,则自动创建用户。 - --base_dn <string> --base_dn <字符串>
-
LDAP base domain name
LDAP 基础域名 - --bind_dn <string> --bind_dn <字符串>
-
LDAP bind domain name
LDAP 绑定域名 -
--capath <string> (default = /etc/ssl/certs)
--capath <string>(默认 = /etc/ssl/certs) -
Path to the CA certificate store
CA 证书存储路径 -
--case-sensitive <boolean> (default = 1)
--case-sensitive <boolean>(默认 = 1) -
username is case-sensitive
用户名区分大小写 - --cert <string>
-
Path to the client certificate
客户端证书路径 - --certkey <string>
-
Path to the client certificate key
客户端证书密钥路径 -
--check-connection <boolean> (default = 0)
--check-connection <boolean>(默认 = 0) -
Check bind connection to the server.
检查与服务器的绑定连接。 - --client-id <string>
-
OpenID Client ID OpenID 客户端 ID
- --client-key <string>
-
OpenID Client Key OpenID 客户端密钥
- --comment <string>
-
Description. 描述。
- --default <boolean>
-
Use this as default realm
将此用作默认域 - --domain \S+
-
AD domain name AD 域名
- --filter <string>
-
LDAP filter for user sync.
用于用户同步的 LDAP 过滤器。 -
--group_classes <string> (default = groupOfNames, group, univentionGroup, ipausergroup)
--group_classes <string>(默认 = groupOfNames, group, univentionGroup, ipausergroup) -
The objectclasses for groups.
组的对象类。 - --group_dn <string>
-
LDAP base domain name for group sync. If not set, the base_dn will be used.
用于组同步的 LDAP 基础域名。如果未设置,将使用 base_dn。 - --group_filter <string>
-
LDAP filter for group sync.
用于组同步的 LDAP 过滤器。 - --group_name_attr <string>
-
LDAP attribute representing a group’s name. If not set or found, the first value of the DN will be used as the name.
表示组名的 LDAP 属性。如果未设置或未找到,则使用 DN 的第一个值作为名称。 -
--groups-autocreate <boolean> (default = 0)
--groups-autocreate <boolean> (默认 = 0) -
Automatically create groups if they do not exist.
如果组不存在,则自动创建组。 - --groups-claim (?^:[A-Za-z0-9\.\-_]+)
-
OpenID claim used to retrieve groups with.
用于检索组的 OpenID 声明。 -
--groups-overwrite <boolean> (default = 0)
--groups-overwrite <boolean> (默认 = 0) -
All groups will be overwritten for the user on login.
用户登录时,所有组将被覆盖。 - --issuer-url <string> --issuer-url <字符串>
-
OpenID Issuer Url OpenID 发行者网址
-
--mode <ldap | ldap+starttls | ldaps> (default = ldap)
--mode <ldap | ldap+starttls | ldaps>(默认 = ldap) -
LDAP protocol mode. LDAP 协议模式。
- --password <string>
-
LDAP bind password. Will be stored in /etc/pve/priv/realm/<REALM>.pw.
LDAP 绑定密码。将存储在 /etc/pve/priv/realm/<REALM>.pw 中。 - --port <integer> (1 - 65535)
-
Server port. 服务器端口。
- --prompt (?:none|login|consent|select_account|\S+)
-
Specifies whether the Authorization Server prompts the End-User for reauthentication and consent.
指定授权服务器是否提示终端用户进行重新认证和同意。 -
--query-userinfo <boolean> (default = 1)
--query-userinfo <boolean> (默认 = 1) -
Enables querying the userinfo endpoint for claims values.
启用查询 userinfo 端点以获取声明值。 -
--scopes <string> (default = email profile)
--scopes <string>(默认 = email profile) -
Specifies the scopes (user details) that should be authorized and returned, for example email or profile.
指定应授权并返回的范围(用户详细信息),例如 email 或 profile。 - --secure <boolean>
-
Use secure LDAPS protocol. DEPRECATED: use mode instead.
使用安全的 LDAPS 协议。已弃用:请改用 mode。 - --server1 <string> --server1 <字符串>
-
Server IP address (or DNS name)
服务器 IP 地址(或 DNS 名称) - --server2 <string> --server2 <字符串>
-
Fallback Server IP address (or DNS name)
备用服务器 IP 地址(或 DNS 名称) - --sslversion <tlsv1 | tlsv1_1 | tlsv1_2 | tlsv1_3>
-
LDAPS TLS/SSL version. It’s not recommended to use versions older than 1.2!
LDAPS TLS/SSL 版本。不建议使用低于 1.2 的版本! - --sync-defaults-options [enable-new=<1|0>] [,full=<1|0>] [,purge=<1|0>] [,remove-vanished=([acl];[properties];[entry])|none] [,scope=<users|groups|both>]
-
The default options for behavior of synchronizations.
同步行为的默认选项。 - --sync_attributes \w+=[^,]+(,\s*\w+=[^,]+)*
-
Comma-separated list of key=value pairs for specifying which LDAP attributes map to which PVE user field. For example, to map the LDAP attribute mail to PVE’s email, write email=mail. By default, each PVE user field is represented by an LDAP attribute of the same name.
用逗号分隔的键值对列表,用于指定哪些 LDAP 属性映射到哪个 PVE 用户字段。例如,要将 LDAP 属性 mail 映射到 PVE 的 email 字段,写成 email=mail。默认情况下,每个 PVE 用户字段由同名的 LDAP 属性表示。 - --tfa type=<TFATYPE> [,digits=<COUNT>] [,id=<ID>] [,key=<KEY>] [,step=<SECONDS>] [,url=<URL>]
-
Use Two-factor authentication.
使用双因素认证。 - --type <ad | ldap | openid | pam | pve>
-
Realm type. 领域类型。
- --user_attr \S{2,}
-
LDAP user attribute name
LDAP 用户属性名称 -
--user_classes <string> (default = inetorgperson, posixaccount, person, user)
--user_classes <string>(默认 = inetorgperson, posixaccount, person, user) -
The objectclasses for users.
用户的对象类。 - --username-claim <string>
-
OpenID claim used to generate the unique username.
用于生成唯一用户名的 OpenID 声明。 -
--verify <boolean> (default = 0)
--verify <boolean>(默认 = 0) -
Verify the server’s SSL certificate
验证服务器的 SSL 证书
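A minimal sketch of adding an LDAP realm with the options above, assuming an LDAP server at ldap.example.com with a standard inetOrgPerson tree (the realm name, server, and DN values are placeholders):

pveum realm add corp-ldap --type ldap \
    --server1 ldap.example.com --mode ldaps \
    --base_dn dc=example,dc=com --user_attr uid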
pveum realm delete <realm>
Delete an authentication server.
删除一个认证服务器。
- <realm>: <string>
-
Authentication domain ID
认证域 ID
pveum realm list [FORMAT_OPTIONS]
Authentication domain index.
认证域索引。
pveum realm modify <realm> [OPTIONS]
pveum realm modify <realm> [选项]
Update authentication server settings.
更新认证服务器设置。
- <realm>: <string> <realm>:<字符串>
-
Authentication domain ID
认证域 ID - --acr-values ^[^\x00-\x1F\x7F <>#"]*$
-
Specifies the Authentication Context Class Reference values that the Authorization Server is being requested to use for the Auth Request.
指定请求授权服务器在认证请求中使用的认证上下文类引用值。 -
--autocreate <boolean> (default = 0)
--autocreate <boolean>(默认 = 0) -
Automatically create users if they do not exist.
如果用户不存在,则自动创建用户。 - --base_dn <string> --base_dn <字符串>
-
LDAP base domain name
LDAP 基础域名 - --bind_dn <string> --bind_dn <字符串>
-
LDAP bind domain name
LDAP 绑定域名 -
--capath <string> (default = /etc/ssl/certs)
--capath <string>(默认 = /etc/ssl/certs) -
Path to the CA certificate store
CA 证书存储路径 -
--case-sensitive <boolean> (default = 1)
--case-sensitive <boolean>(默认 = 1) -
username is case-sensitive
用户名区分大小写 - --cert <string>
-
Path to the client certificate
客户端证书路径 - --certkey <string>
-
Path to the client certificate key
客户端证书密钥路径 -
--check-connection <boolean> (default = 0)
--check-connection <boolean>(默认 = 0) -
Check bind connection to the server.
检查与服务器的绑定连接。 - --client-id <string>
-
OpenID Client ID OpenID 客户端 ID
- --client-key <string>
-
OpenID Client Key OpenID 客户端密钥
- --comment <string>
-
Description. 描述。
- --default <boolean>
-
Use this as default realm
将此用作默认领域 - --delete <string>
-
A list of settings you want to delete.
您想要删除的一组设置。 - --digest <string>
-
Prevent changes if current configuration file has a different digest. This can be used to prevent concurrent modifications.
如果当前配置文件的摘要不同,则阻止更改。此功能可用于防止并发修改。 - --domain \S+
-
AD domain name AD 域名
- --filter <string>
-
LDAP filter for user sync.
用于用户同步的 LDAP 过滤器。 -
--group_classes <string> (default = groupOfNames, group, univentionGroup, ipausergroup)
--group_classes <string>(默认 = groupOfNames, group, univentionGroup, ipausergroup) -
The objectclasses for groups.
组的对象类。 - --group_dn <string>
-
LDAP base domain name for group sync. If not set, the base_dn will be used.
用于组同步的 LDAP 基础域名。如果未设置,将使用 base_dn。 - --group_filter <string>
-
LDAP filter for group sync.
用于组同步的 LDAP 过滤器。 - --group_name_attr <string>
-
LDAP attribute representing a group’s name. If not set or found, the first value of the DN will be used as the name.
表示组名的 LDAP 属性。如果未设置或未找到,则使用 DN 的第一个值作为名称。 -
--groups-autocreate <boolean> (default = 0)
--groups-autocreate <boolean> (默认 = 0) -
Automatically create groups if they do not exist.
如果组不存在,则自动创建组。 - --groups-claim (?^:[A-Za-z0-9\.\-_]+)
-
OpenID claim used to retrieve groups with.
用于检索组的 OpenID 声明。 -
--groups-overwrite <boolean> (default = 0)
--groups-overwrite <boolean> (默认 = 0) -
All groups will be overwritten for the user on login.
用户登录时,所有组将被覆盖。 - --issuer-url <string> --issuer-url <字符串>
-
OpenID Issuer Url OpenID 发行者网址
-
--mode <ldap | ldap+starttls | ldaps> (default = ldap)
--mode <ldap | ldap+starttls | ldaps>(默认 = ldap) -
LDAP protocol mode. LDAP 协议模式。
- --password <string>
-
LDAP bind password. Will be stored in /etc/pve/priv/realm/<REALM>.pw.
LDAP 绑定密码。将存储在 /etc/pve/priv/realm/<REALM>.pw 中。 - --port <integer> (1 - 65535)
-
Server port. 服务器端口。
- --prompt (?:none|login|consent|select_account|\S+)
-
Specifies whether the Authorization Server prompts the End-User for reauthentication and consent.
指定授权服务器是否提示终端用户进行重新认证和同意。 -
--query-userinfo <boolean> (default = 1)
--query-userinfo <boolean> (默认 = 1) -
Enables querying the userinfo endpoint for claims values.
启用查询 userinfo 端点以获取声明值。 -
--scopes <string> (default = email profile)
--scopes <string>(默认 = email profile) -
Specifies the scopes (user details) that should be authorized and returned, for example email or profile.
指定应授权并返回的范围(用户详细信息),例如 email 或 profile。 - --secure <boolean>
-
Use secure LDAPS protocol. DEPRECATED: use mode instead.
使用安全的 LDAPS 协议。已弃用:请改用 mode。 - --server1 <string> --server1 <字符串>
-
Server IP address (or DNS name)
服务器 IP 地址(或 DNS 名称) - --server2 <string> --server2 <字符串>
-
Fallback Server IP address (or DNS name)
备用服务器 IP 地址(或 DNS 名称) - --sslversion <tlsv1 | tlsv1_1 | tlsv1_2 | tlsv1_3>
-
LDAPS TLS/SSL version. It’s not recommended to use versions older than 1.2!
LDAPS TLS/SSL 版本。不建议使用低于 1.2 的版本! - --sync-defaults-options [enable-new=<1|0>] [,full=<1|0>] [,purge=<1|0>] [,remove-vanished=([acl];[properties];[entry])|none] [,scope=<users|groups|both>]
-
The default options for behavior of synchronizations.
同步行为的默认选项。 - --sync_attributes \w+=[^,]+(,\s*\w+=[^,]+)*
-
Comma-separated list of key=value pairs for specifying which LDAP attributes map to which PVE user field. For example, to map the LDAP attribute mail to PVE’s email, write email=mail. By default, each PVE user field is represented by an LDAP attribute of the same name.
用逗号分隔的键值对列表,用于指定哪些 LDAP 属性映射到哪个 PVE 用户字段。例如,要将 LDAP 属性 mail 映射到 PVE 的 email 字段,写成 email=mail。默认情况下,每个 PVE 用户字段由同名的 LDAP 属性表示。 - --tfa type=<TFATYPE> [,digits=<COUNT>] [,id=<ID>] [,key=<KEY>] [,step=<SECONDS>] [,url=<URL>]
-
Use Two-factor authentication.
使用双因素认证。 - --user_attr \S{2,}
-
LDAP user attribute name
LDAP 用户属性名称 -
--user_classes <string> (default = inetorgperson, posixaccount, person, user)
--user_classes <string> (默认 = inetorgperson, posixaccount, person, user) -
The objectclasses for users.
用户的对象类。 -
--verify <boolean> (default = 0)
--verify <boolean>(默认值 = 0) -
Verify the server’s SSL certificate
验证服务器的 SSL 证书
pveum realm sync <realm> [OPTIONS]
pveum realm sync <realm> [选项]
Syncs users and/or groups from the configured LDAP to user.cfg. NOTE:
Synced groups will have the name name-$realm, so make sure those groups
do not exist to prevent overwriting.
将配置的 LDAP 中的用户和/或组同步到 user.cfg。注意:同步的组名称将为 name-$realm,请确保这些组不存在以防止被覆盖。
- <realm>: <string>
-
Authentication domain ID
认证域 ID -
--dry-run <boolean> (default = 0)
--dry-run <boolean> (默认 = 0) -
If set, does not write anything.
如果设置,则不写入任何内容。 -
--enable-new <boolean> (default = 1)
--enable-new <boolean>(默认值 = 1) -
Enable newly synced users immediately.
立即启用新同步的用户。 - --full <boolean>
-
DEPRECATED: use remove-vanished instead. If set, uses the LDAP Directory as source of truth, deleting users or groups not returned from the sync and removing all locally modified properties of synced users. If not set, only syncs information which is present in the synced data, and does not delete or modify anything else.
已弃用:请改用 remove-vanished。如果设置,使用 LDAP 目录作为权威来源,删除同步中未返回的用户或组,并移除所有本地修改的同步用户属性。如果未设置,则仅同步同步数据中存在的信息,不删除或修改其他内容。 - --purge <boolean>
-
DEPRECATED: use remove-vanished instead. Remove ACLs for users or groups which were removed from the config during a sync.
已废弃:请改用 remove-vanished。移除在同步过程中从配置中删除的用户或组的 ACL。 -
--remove-vanished ([acl];[properties];[entry])|none (default = none)
--remove-vanished ([acl];[properties];[entry])|none (默认 = none) -
A semicolon-separated list of things to remove when they or the user vanishes during a sync. The following values are possible: entry removes the user/group when not returned from the sync. properties removes the set properties on existing user/group that do not appear in the source (even custom ones). acl removes acls when the user/group is not returned from the sync. Instead of a list it also can be none (the default).
一个以分号分隔的列表,用于指定在同步过程中当用户消失时要移除的内容。可能的值包括:entry 表示当同步未返回该用户/组时移除该用户/组;properties 表示移除在源中未出现的现有用户/组的设置属性(包括自定义属性);acl 表示当同步未返回该用户/组时移除其 ACL。该参数也可以设置为 none(默认值)。 - --scope <both | groups | users>
-
Select what to sync.
选择要同步的内容。
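For example, a dry run of a full sync of users and groups for a realm named corp-ldap (the realm name is an assumption from the earlier example):

pveum realm sync corp-ldap --scope both \
    --remove-vanished "acl;properties;entry" --dry-run 1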
pveum role add <roleid> [OPTIONS]
Create new role. 创建新角色。
- <roleid>: <string>
-
no description available
无可用描述 - --privs <string>
-
no description available
无可用描述
pveum role delete <roleid>
Delete role. 删除角色。
- <roleid>: <string>
-
no description available
无可用描述
pveum role list [FORMAT_OPTIONS]
pveum role list [格式选项]
Role index. 角色索引。
pveum role modify <roleid> [OPTIONS]
pveum role modify <roleid> [选项]
Update an existing role. 更新现有角色。
- <roleid>: <string>
-
no description available
无可用描述 - --append <boolean>
-
no description available
无可用描述Requires option(s): privs
需要选项:privs - --privs <string> --privs <字符串>
-
no description available
无可用描述
pveum roleadd
An alias for pveum role add.
pveum role add 的别名。
pveum roledel
An alias for pveum role delete.
pveum role delete 的别名。
pveum rolemod
An alias for pveum role modify.
pveum role modify 的别名。
pveum ticket <username> [OPTIONS]
pveum ticket <username> [选项]
Create or verify authentication ticket.
创建或验证认证票据。
- <username>: <string> <username>:<字符串>
-
User name 用户名
-
--new-format <boolean> (default = 1)
--new-format <boolean>(默认 = 1) -
This parameter is now ignored and assumed to be 1.
此参数现已被忽略,默认视为 1。 - --otp <string> --otp <字符串>
-
One-time password for Two-factor authentication.
一次性密码,用于双因素认证。 - --path <string> --path <字符串>
-
Verify ticket, and check if the user has access privileges on the path
验证票据,并检查用户是否对路径具有访问权限Requires option(s): privs
需要选项:privs - --privs <string>
-
Verify ticket, and check if the user has access privileges on the path
验证票据,并检查用户是否对路径具有访问权限Requires option(s): path 需要选项:path - --realm <string>
-
You can optionally pass the realm using this parameter. Normally the realm is simply added to the username <username>@<realm>.
您可以选择使用此参数传递域。通常,域会直接添加到用户名中,格式为 <username>@<realm>。 - --tfa-challenge <string>
-
The signed TFA challenge string the user wants to respond to.
用户想要响应的已签名的双因素认证挑战字符串。
pveum user add <userid> [OPTIONS]
Create new user. 创建新用户。
- <userid>: <string>
-
Full User ID, in the name@realm format.
完整的用户 ID,格式为 name@realm。 - --comment <string>
-
no description available
无可用描述 - --email <string>
-
no description available
无可用描述 -
--enable <boolean> (default = 1)
--enable <boolean>(默认 = 1) -
Enable the account (default). You can set this to 0 to disable the account
启用账户(默认)。您可以设置为 0 以禁用账户 -
--expire <integer> (0 - N)
--expire <整数>(0 - N) -
Account expiration date (seconds since epoch). 0 means no expiration date.
账户过期日期(自纪元以来的秒数)。0 表示无过期日期。 - --firstname <string> --firstname <字符串>
-
no description available
无可用描述 - --groups <string>
-
no description available
无可用描述 - --keys [0-9a-zA-Z!=]{0,4096}
-
Keys for two factor auth (yubico).
两因素认证密钥(yubico)。 - --lastname <string>
-
no description available
无可用描述 - --password <string>
-
Initial password. 初始密码。
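As an illustration (the user name, realm, and values are assumptions; the password can also be set interactively afterwards with pveum passwd instead of passing it on the command line):

pveum user add alice@pve --email alice@example.com --firstname Alice --comment "Web admin"
pveum passwd alice@pve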
pveum user delete <userid>
Delete user. 删除用户。
- <userid>: <string>
-
Full User ID, in the name@realm format.
完整的用户 ID,格式为 name@realm。
pveum user list [OPTIONS] [FORMAT_OPTIONS]
pveum user list [选项] [格式选项]
User index. 用户索引。
- --enabled <boolean> --enabled <布尔值>
-
Optional filter for enable property.
启用属性的可选过滤器。 -
--full <boolean> (default = 0)
--full <boolean>(默认值 = 0) -
Include group and token information.
包含组和代币信息。
pveum user modify <userid> [OPTIONS]
pveum user modify <userid> [选项]
Update user configuration.
更新用户配置。
- <userid>: <string>
-
Full User ID, in the name@realm format.
完整用户 ID,格式为 name@realm。 - --append <boolean>
-
no description available
无可用描述Requires option(s): groups
需要选项:groups - --comment <string>
-
no description available
无可用描述 - --email <string>
-
no description available
无可用描述 -
--enable <boolean> (default = 1)
--enable <boolean> (默认 = 1) -
Enable the account (default). You can set this to 0 to disable the account
启用账户(默认)。您可以设置为 0 以禁用账户 -
--expire <integer> (0 - N)
--expire <整数> (0 - N) -
Account expiration date (seconds since epoch). 0 means no expiration date.
账户过期时间(自纪元以来的秒数)。0 表示无过期时间。 - --firstname <string> --firstname <字符串>
-
no description available
无可用描述 - --groups <string>
-
no description available
无可用描述 - --keys [0-9a-zA-Z!=]{0,4096}
-
Keys for two factor auth (yubico).
双因素认证(yubico)的密钥。 - --lastname <string> --lastname <字符串>
-
no description available
无可用描述
pveum user permissions [<userid>] [OPTIONS] [FORMAT_OPTIONS]
pveum user permissions [<userid>] [选项] [格式选项]
Retrieve effective permissions of given user/token.
检索指定用户/代币的有效权限。
- <userid>: (?^:^(?^:[^\s:/]+)\@(?^:[A-Za-z][A-Za-z0-9\.\-_]+)(?:!(?^:[A-Za-z][A-Za-z0-9\.\-_]+))?$)
-
User ID or full API token ID
用户 ID 或完整的 API 代币 ID - --path <string>
-
Only dump this specific path, not the whole tree.
仅导出此特定路径,而非整个树。
pveum user tfa delete <userid> [OPTIONS]
pveum user tfa delete <userid> [选项]
Delete TFA entries from a user.
从用户中删除 TFA 条目。
- <userid>: <string> <userid>:<字符串>
-
Full User ID, in the name@realm format.
完整的用户 ID,格式为 name@realm。 - --id <string>
-
The TFA ID, if none provided, all TFA entries will be deleted.
TFA ID,如果未提供,则所有 TFA 条目将被删除。
pveum user tfa list [<userid>]
List TFA entries. 列出 TFA 条目。
- <userid>: <string>
-
Full User ID, in the name@realm format.
完整的用户 ID,格式为 name@realm。
pveum user tfa unlock <userid>
Unlock a user’s TFA authentication.
解锁用户的双因素认证(TFA)。
- <userid>: <string>
-
Full User ID, in the name@realm format.
完整的用户 ID,格式为 name@realm。
pveum user token add <userid> <tokenid> [OPTIONS] [FORMAT_OPTIONS]
pveum user token add <userid> <tokenid> [选项] [格式选项]
Generate a new API token for a specific user. NOTE: returns API token
value, which needs to be stored as it cannot be retrieved afterwards!
为特定用户生成一个新的 API 代币。注意:返回 API 代币值,需妥善保存,之后无法再次获取!
- <userid>: <string>
-
Full User ID, in the name@realm format.
完整的用户 ID,格式为 name@realm。 - <tokenid>: (?^:[A-Za-z][A-Za-z0-9\.\-_]+)
-
User-specific token identifier.
用户特定的代币标识符。 - --comment <string>
-
no description available
无可用描述 -
--expire <integer> (0 - N) (default = same as user)
--expire <integer> (0 - N) (默认 = 与用户相同) -
API token expiration date (seconds since epoch). 0 means no expiration date.
API 代币过期时间(自纪元以来的秒数)。0 表示无过期时间。 -
--privsep <boolean> (default = 1)
--privsep <boolean>(默认值 = 1) -
Restrict API token privileges with separate ACLs (default), or give full privileges of corresponding user.
通过单独的 ACL 限制 API 代币权限(默认),或赋予对应用户的全部权限。
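A sketch of creating a privilege-separated token and granting it a role via an ACL (the user, token ID, path, and the built-in PVEVMAdmin role are illustrative); note that the returned token value is shown only once and must be stored:

pveum user token add alice@pve automation --privsep 1 --comment "CI token"
pveum acl modify /vms --tokens 'alice@pve!automation' --roles PVEVMAdmin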
pveum user token delete <userid> <tokenid> [FORMAT_OPTIONS]
Remove API token for a specific user.
删除特定用户的 API 代币。
- <userid>: <string>
-
Full User ID, in the name@realm format.
完整的用户 ID,格式为 name@realm。 - <tokenid>: (?^:[A-Za-z][A-Za-z0-9\.\-_]+)
-
User-specific token identifier.
用户特定的代币标识符。
pveum user token list <userid> [FORMAT_OPTIONS]
pveum user token list <userid> [格式选项]
Get user API tokens. 获取用户 API 代币。
- <userid>: <string> <userid>:<字符串>
-
Full User ID, in the name@realm format.
完整用户 ID,格式为 name@realm。
pveum user token modify <userid> <tokenid> [OPTIONS] [FORMAT_OPTIONS]
pveum user token modify <userid> <tokenid> [选项] [格式选项]
Update API token for a specific user.
更新特定用户的 API 代币。
- <userid>: <string> <userid>:<字符串>
-
Full User ID, in the name@realm format.
完整的用户 ID,格式为 name@realm。 - <tokenid>: (?^:[A-Za-z][A-Za-z0-9\.\-_]+)
-
User-specific token identifier.
用户特定的代币标识符。 - --comment <string>
-
no description available
无可用描述 -
--expire <integer> (0 - N) (default = same as user)
--expire <整数> (0 - N) (默认 = 与用户相同) -
API token expiration date (seconds since epoch). 0 means no expiration date.
API 代币过期时间(自纪元以来的秒数)。0 表示无过期时间。 -
--privsep <boolean> (default = 1)
--privsep <布尔值> (默认 = 1) -
Restrict API token privileges with separate ACLs (default), or give full privileges of corresponding user.
使用独立的访问控制列表限制 API 代币权限(默认),或赋予对应用户的全部权限。
pveum user token permissions <userid> <tokenid> [OPTIONS] [FORMAT_OPTIONS]
pveum user token permissions <userid> <tokenid> [选项] [格式选项]
Retrieve effective permissions of given token.
检索指定代币的有效权限。
- <userid>: <string> <userid>:<字符串>
-
Full User ID, in the name@realm format.
完整用户 ID,格式为 name@realm。 - <tokenid>: (?^:[A-Za-z][A-Za-z0-9\.\-_]+)
-
User-specific token identifier.
用户特定的代币标识符。 - --path <string>
-
Only dump this specific path, not the whole tree.
仅导出此特定路径,而非整个树。
pveum user token remove
An alias for pveum user token delete.
pveum user token delete 的别名。
pveum useradd
An alias for pveum user add.
pveum user add 的别名。
pveum userdel
An alias for pveum user delete.
pveum user delete 的别名。
pveum usermod
An alias for pveum user modify.
pveum user modify 的别名。
22.16. vzdump - Backup Utility for VMs and Containers
22.16. vzdump - 虚拟机和容器备份工具
vzdump help
vzdump {<vmid>} [OPTIONS]
vzdump {<vmid>} [选项]
Create backup. 创建备份。
- <vmid>: <string>
-
The ID of the guest system you want to backup.
您想要备份的客户机系统的 ID。 -
--all <boolean> (default = 0)
--all <boolean>(默认 = 0) -
Backup all known guest systems on this host.
备份此主机上所有已知的客户机系统。 -
--bwlimit <integer> (0 - N) (default = 0)
--bwlimit <整数> (0 - N) (默认 = 0) -
Limit I/O bandwidth (in KiB/s).
限制 I/O 带宽(以 KiB/s 为单位)。 -
--compress <0 | 1 | gzip | lzo | zstd> (default = 0)
--compress <0 | 1 | gzip | lzo | zstd> (默认 = 0) -
Compress dump file. 压缩转储文件。
- --dumpdir <string>
-
Store resulting files to specified directory.
将生成的文件存储到指定目录。 - --exclude <string>
-
Exclude specified guest systems (assumes --all)
排除指定的客户机系统(假设使用了 --all) - --exclude-path <array> --exclude-path <数组>
-
Exclude certain files/directories (shell globs). Paths starting with / are anchored to the container’s root, other paths match relative to each subdirectory.
排除某些文件/目录(shell 通配符)。以 / 开头的路径锚定到容器根目录,其他路径相对于每个子目录匹配。 -
--fleecing [[enabled=]<1|0>] [,storage=<storage ID>]
--fleecing [[enabled=]<1|0>] [,storage=<存储 ID>] -
Options for backup fleecing (VM only).
备份剥离选项(仅限虚拟机)。 -
--ionice <integer> (0 - 8) (default = 7)
--ionice <整数> (0 - 8) (默认 = 7) -
Set IO priority when using the BFQ scheduler. For snapshot and suspend mode backups of VMs, this only affects the compressor. A value of 8 means the idle priority is used, otherwise the best-effort priority is used with the specified value.
使用 BFQ 调度器时设置 IO 优先级。对于虚拟机的快照和挂起模式备份,这仅影响压缩器。值为 8 表示使用空闲优先级,否则使用指定值的尽力而为优先级。 - --job-id \S+
-
The ID of the backup job. If set, the backup-job metadata field of the backup notification will be set to this value. Only root@pam can set this parameter.
备份任务的 ID。如果设置,备份通知的备份任务元数据字段将被设置为此值。只有 root@pam 可以设置此参数。 -
--lockwait <integer> (0 - N) (default = 180)
--lockwait <整数> (0 - N) (默认 = 180) -
Maximal time to wait for the global lock (minutes).
等待全局锁的最长时间(分钟)。 -
--mailnotification <always | failure> (default = always)
--mailnotification <always | failure> (默认 = always) -
Deprecated: use notification targets/matchers instead. Specify when to send a notification mail
已弃用:请改用通知目标/匹配器。指定何时发送通知邮件。 - --mailto <string> --mailto <字符串>
-
Deprecated: Use notification targets/matchers instead. Comma-separated list of email addresses or users that should receive email notifications.
已弃用:请改用通知目标/匹配器。以逗号分隔的电子邮件地址或用户列表,接收邮件通知。 -
--maxfiles <integer> (1 - N)
--maxfiles <整数> (1 - N) -
Deprecated: use prune-backups instead. Maximal number of backup files per guest system.
已弃用:请改用 prune-backups。每个客户系统的最大备份文件数量。 -
--mode <snapshot | stop | suspend> (default = snapshot)
--mode <snapshot | stop | suspend>(默认 = snapshot) -
Backup mode. 备份模式。
- --node <string>
-
Only run if executed on this node.
仅在此节点上执行时运行。 - --notes-template <string>
-
Template string for generating notes for the backup(s). It can contain variables which will be replaced by their values. Currently supported are {{cluster}}, {{guestname}}, {{node}}, and {{vmid}}, but more might be added in the future. Needs to be a single line, newline and backslash need to be escaped as \n and \\ respectively.
用于生成备份备注的模板字符串。它可以包含将被其值替换的变量。目前支持 {{cluster}}、{{guestname}}、{{node}} 和 {{vmid}},未来可能会添加更多。必须为单行,换行符和反斜杠需要分别转义为 \n 和 \\。Requires option(s): storage
需要选项:storage -
--notification-mode <auto | legacy-sendmail | notification-system> (default = auto)
--notification-mode <auto | legacy-sendmail | notification-system>(默认 = auto) -
Determine which notification system to use. If set to legacy-sendmail, vzdump will consider the mailto/mailnotification parameters and send emails to the specified address(es) via the sendmail command. If set to notification-system, a notification will be sent via PVE’s notification system, and the mailto and mailnotification will be ignored. If set to auto (default setting), an email will be sent if mailto is set, and the notification system will be used if not.
确定使用哪种通知系统。如果设置为 legacy-sendmail,vzdump 将考虑 mailto/mailnotification 参数,并通过 sendmail 命令向指定的地址发送邮件。如果设置为 notification-system,则通过 PVE 的通知系统发送通知,mailto 和 mailnotification 参数将被忽略。如果设置为 auto(默认设置),则如果设置了 mailto,将发送电子邮件;如果未设置,则使用通知系统。 -
--notification-policy <always | failure | never> (default = always)
--notification-policy <always | failure | never>(默认 = always) -
Deprecated: Do not use
已弃用:请勿使用 - --notification-target <string>
-
Deprecated: Do not use
已弃用:请勿使用 - --pbs-change-detection-mode <data | legacy | metadata>
-
PBS mode used to detect file changes and switch encoding format for container backups.
用于检测文件更改并切换容器备份编码格式的 PBS 模式。 -
--performance [max-workers=<integer>] [,pbs-entries-max=<integer>]
--performance [max-workers=<整数>] [,pbs-entries-max=<整数>] -
Other performance-related settings.
其他与性能相关的设置。 -
--pigz <integer> (default = 0)
--pigz <整数>(默认值 = 0) -
Use pigz instead of gzip when N>0. N=1 uses half of cores, N>1 uses N as thread count.
当 N>0 时使用 pigz 替代 gzip。N=1 使用一半的核心数,N>1 使用 N 作为线程数。 - --pool <string> --pool <字符串>
-
Backup all known guest systems included in the specified pool.
备份指定池中所有已知的客户系统。 - --protected <boolean>
-
If true, mark backup(s) as protected.
如果为真,则将备份标记为受保护。Requires option(s): storage
需要选项:storage -
--prune-backups [keep-all=<1|0>] [,keep-daily=<N>] [,keep-hourly=<N>] [,keep-last=<N>] [,keep-monthly=<N>] [,keep-weekly=<N>] [,keep-yearly=<N>] (default = keep-all=1)
--prune-backups [keep-all=<1|0>] [,keep-daily=<N>] [,keep-hourly=<N>] [,keep-last=<N>] [,keep-monthly=<N>] [,keep-weekly=<N>] [,keep-yearly=<N>](默认 = keep-all=1) -
Use these retention options instead of those from the storage configuration.
使用这些保留选项替代存储配置中的选项。 -
--quiet <boolean> (default = 0)
--quiet <boolean>(默认 = 0) -
Be quiet. 保持安静。
-
--remove <boolean> (default = 1)
--remove <布尔值>(默认 = 1) -
Prune older backups according to prune-backups.
根据 prune-backups 修剪较旧的备份。 - --script <string> --script <字符串>
-
Use specified hook script.
使用指定的钩子脚本。 -
--stdexcludes <boolean> (default = 1)
--stdexcludes <布尔值>(默认 = 1) -
Exclude temporary files and logs.
排除临时文件和日志。 - --stdout <boolean> --stdout <布尔值>
-
Write tar to stdout, not to a file.
将 tar 写入标准输出,而不是写入文件。 -
--stop <boolean> (default = 0)
--stop <boolean>(默认值 = 0) -
Stop running backup jobs on this host.
停止此主机上正在运行的备份任务。 -
--stopwait <integer> (0 - N) (default = 10)
--stopwait <integer>(0 - N)(默认值 = 10) -
Maximal time to wait until a guest system is stopped (minutes).
等待客户机系统停止的最长时间(分钟)。 - --storage <storage ID>
-
Store resulting file to this storage.
将生成的文件存储到此存储。 - --tmpdir <string>
-
Store temporary files to specified directory.
将临时文件存储到指定目录。 -
--zstd <integer> (default = 1)
--zstd <整数>(默认值 = 1) -
Zstd threads. N=0 uses half of the available cores, if N is set to a value bigger than 0, N is used as thread count.
Zstd 线程数。N=0 时使用可用核心数的一半,如果 N 设置为大于 0 的值,则 N 作为线程数使用。
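To tie these options together, here is a hedged example invocation; it assumes a guest with VMID 100 and a configured storage named local (both placeholders) and is only meant as a sketch, not a recommended setup:
vzdump 100 --mode snapshot --compress zstd --storage local --prune-backups keep-last=3 --notes-template "{{guestname}} backed up on {{node}}"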
22.17. ha-manager - Proxmox VE HA Manager
22.17. ha-manager - Proxmox VE 高可用管理器
ha-manager <COMMAND> [ARGS] [OPTIONS]
ha-manager <命令> [参数] [选项]
ha-manager add <sid> [OPTIONS]
ha-manager add <sid> [选项]
Create a new HA resource.
创建一个新的高可用资源。
- <sid>: <type>:<name> <sid>: <类型>:<名称>
-
HA resource ID. This consists of a resource type followed by a resource specific name, separated with colon (example: vm:100 / ct:100). For virtual machines and containers, you can simply use the VM or CT id as a shortcut (example: 100).
高可用资源 ID。由资源类型和资源特定名称组成,中间用冒号分隔(例如:vm:100 / ct:100)。对于虚拟机和容器,可以直接使用虚拟机或容器的 ID 作为快捷方式(例如:100)。 - --comment <string>
-
Description. 描述。
- --group <string>
-
The HA group identifier.
HA 组标识符。 -
--max_relocate <integer> (0 - N) (default = 1)
--max_relocate <整数> (0 - N) (默认 = 1) -
Maximal number of service relocate tries when a service fails to start.
服务启动失败时,最大服务迁移尝试次数。 -
--max_restart <integer> (0 - N) (default = 1)
--max_restart <整数> (0 - N) (默认 = 1) -
Maximal number of tries to restart the service on a node after its start failed.
服务启动失败后,在节点上重启服务的最大尝试次数。 -
--state <disabled | enabled | ignored | started | stopped> (default = started)
--state <disabled | enabled | ignored | started | stopped>(默认 = started) -
Requested resource state.
请求的资源状态。 - --type <ct | vm>
-
Resource type. 资源类型。
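A minimal sketch of adding a guest as an HA resource, assuming a VM with ID 100 exists and an HA group named mygroup has already been created (both are placeholder names):
ha-manager add vm:100 --state started --group mygroup --max_restart 2 --comment "managed web server"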
ha-manager config [OPTIONS]
ha-manager 配置 [选项]
List HA resources. 列出高可用资源。
- --type <ct | vm>
-
Only list resources of specific type
仅列出特定类型的资源
ha-manager crm-command migrate <sid> <node>
Request resource migration (online) to another node.
请求将资源(在线)迁移到另一个节点。
- <sid>: <type>:<name>
-
HA resource ID. This consists of a resource type followed by a resource specific name, separated with colon (example: vm:100 / ct:100). For virtual machines and containers, you can simply use the VM or CT id as a shortcut (example: 100).
HA 资源 ID。由资源类型和资源特定名称组成,中间用冒号分隔(例如:vm:100 / ct:100)。对于虚拟机和容器,可以直接使用虚拟机或容器的 ID 作为快捷方式(例如:100)。 - <node>: <string>
-
Target node. 目标节点。
ha-manager crm-command node-maintenance disable <node>
Change the node-maintenance request state.
更改节点维护请求状态。
- <node>: <string>
-
The cluster node name.
集群节点名称。
ha-manager crm-command node-maintenance enable <node>
Change the node-maintenance request state.
更改节点维护请求状态。
- <node>: <string>
-
The cluster node name.
集群节点名称。
ha-manager crm-command relocate <sid> <node>
Request resource relocation to another node. This stops the service on the
old node, and restarts it on the target node.
请求将资源迁移到另一个节点。这会停止旧节点上的服务,并在目标节点上重新启动该服务。
- <sid>: <type>:<name>
-
HA resource ID. This consists of a resource type followed by a resource specific name, separated with colon (example: vm:100 / ct:100). For virtual machines and containers, you can simply use the VM or CT id as a shortcut (example: 100).
HA 资源 ID。由资源类型和资源特定名称组成,中间用冒号分隔(例如:vm:100 / ct:100)。对于虚拟机和容器,可以直接使用虚拟机或容器的 ID 作为快捷方式(例如:100)。 - <node>: <string>
-
Target node. 目标节点。
ha-manager crm-command stop <sid> <timeout>
Request the service to be stopped.
请求停止该服务。
- <sid>: <type>:<name>
-
HA resource ID. This consists of a resource type followed by a resource specific name, separated with colon (example: vm:100 / ct:100). For virtual machines and containers, you can simply use the VM or CT id as a shortcut (example: 100).
HA 资源 ID。由资源类型和资源特定名称组成,中间用冒号分隔(例如:vm:100 / ct:100)。对于虚拟机和容器,可以直接使用虚拟机或容器的 ID 作为快捷方式(例如:100)。 - <timeout>: <integer> (0 - N)
-
Timeout in seconds. If set to 0 a hard stop will be performed.
超时时间(秒)。如果设置为 0,将执行强制停止。
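As a sketch, assuming an HA-managed VM with ID 100 (placeholder), a graceful stop with a 60 second timeout and a hard stop would be requested like this:
ha-manager crm-command stop vm:100 60
ha-manager crm-command stop vm:100 0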
ha-manager groupadd <group> --nodes <string> [OPTIONS]
Create a new HA group.
创建一个新的高可用组。
- <group>: <string>
-
The HA group identifier.
HA 组标识符。 - --comment <string>
-
Description. 描述。
- --nodes <node>[:<pri>]{,<node>[:<pri>]}*
-
List of cluster node names with optional priority.
集群节点名称列表,可选优先级。 -
--nofailback <boolean> (default = 0)
--nofailback <boolean>(默认值 = 0) -
The CRM tries to run services on the node with the highest priority. If a node with higher priority comes online, the CRM migrates the service to that node. Enabling nofailback prevents that behavior.
CRM 会尝试在优先级最高的节点上运行服务。如果优先级更高的节点上线,CRM 会将服务迁移到该节点。启用 nofailback 可防止此行为。 -
--restricted <boolean> (default = 0)
--restricted <boolean> (默认 = 0) -
Resources bound to restricted groups may only run on nodes defined by the group.
绑定到受限组的资源只能在该组定义的节点上运行。 - --type <group>
-
Group type. 组类型。
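A hedged example of creating a group, assuming cluster nodes named node1, node2 and node3 (placeholders); node1 gets the highest priority and resources are restricted to the listed nodes:
ha-manager groupadd mygroup --nodes "node1:2,node2:1,node3" --restricted 1 --nofailback 1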
ha-manager groupconfig
Get HA groups. 获取高可用组。
ha-manager groupremove <group>
Delete ha group configuration.
删除高可用组配置。
- <group>: <string>
-
The HA group identifier.
HA 组标识符。
ha-manager groupset <group> [OPTIONS]
Update ha group configuration.
更新 HA 组配置。
- <group>: <string>
-
The HA group identifier.
HA 组标识符。 - --comment <string>
-
Description. 描述。
- --delete <string>
-
A list of settings you want to delete.
您想要删除的设置列表。 - --digest <string>
-
Prevent changes if current configuration file has a different digest. This can be used to prevent concurrent modifications.
如果当前配置文件的摘要不同,则阻止更改。此功能可用于防止并发修改。 - --nodes <node>[:<pri>]{,<node>[:<pri>]}*
-
List of cluster node names with optional priority.
集群节点名称列表,可选优先级。 -
--nofailback <boolean> (default = 0)
--nofailback <boolean>(默认值 = 0) -
The CRM tries to run services on the node with the highest priority. If a node with higher priority comes online, the CRM migrates the service to that node. Enabling nofailback prevents that behavior.
CRM 会尝试在优先级最高的节点上运行服务。如果优先级更高的节点上线,CRM 会将服务迁移到该节点。启用 nofailback 可防止此行为。 -
--restricted <boolean> (default = 0)
--restricted <boolean>(默认值 = 0) -
Resources bound to restricted groups may only run on nodes defined by the group.
绑定到受限组的资源只能在该组定义的节点上运行。
ha-manager help [OPTIONS]
ha-manager help [选项]
Get help about specified command.
获取指定命令的帮助信息。
- --extra-args <array> --extra-args <数组>
-
Shows help for a specific command
显示特定命令的帮助信息 - --verbose <boolean> --verbose <布尔值>
-
Verbose output format. 详细输出格式。
ha-manager migrate
An alias for ha-manager crm-command migrate.
ha-manager crm-command migrate 的别名。
ha-manager relocate
An alias for ha-manager crm-command relocate.
ha-manager crm-command relocate 的别名。
ha-manager remove <sid>
Delete resource configuration.
删除资源配置。
- <sid>: <type>:<name>
-
HA resource ID. This consists of a resource type followed by a resource specific name, separated with colon (example: vm:100 / ct:100). For virtual machines and containers, you can simply use the VM or CT id as a shortcut (example: 100).
HA 资源 ID。它由资源类型和资源特定名称组成,中间用冒号分隔(例如:vm:100 / ct:100)。对于虚拟机和容器,你也可以直接使用虚拟机或容器的 ID 作为快捷方式(例如:100)。
ha-manager set <sid> [OPTIONS]
ha-manager set <sid> [选项]
Update resource configuration.
更新资源配置。
- <sid>: <type>:<name> <sid>:<类型>:<名称>
-
HA resource ID. This consists of a resource type followed by a resource specific name, separated with colon (example: vm:100 / ct:100). For virtual machines and containers, you can simply use the VM or CT id as a shortcut (example: 100).
HA 资源 ID。由资源类型和资源特定名称组成,中间用冒号分隔(例如:vm:100 / ct:100)。对于虚拟机和容器,可以直接使用虚拟机或容器的 ID 作为快捷方式(例如:100)。 - --comment <string>
-
Description. 描述。
- --delete <string>
-
A list of settings you want to delete.
您想要删除的设置列表。 - --digest <string>
-
Prevent changes if current configuration file has a different digest. This can be used to prevent concurrent modifications.
如果当前配置文件的摘要不同,则阻止更改。此功能可用于防止并发修改。 - --group <string>
-
The HA group identifier.
HA 组标识符。 -
--max_relocate <integer> (0 - N) (default = 1)
--max_relocate <整数> (0 - N) (默认 = 1) -
Maximal number of service relocate tries when a service fails to start.
服务启动失败时,最大服务迁移尝试次数。 -
--max_restart <integer> (0 - N) (default = 1)
--max_restart <整数> (0 - N) (默认 = 1) -
Maximal number of tries to restart the service on a node after its start failed.
服务启动失败后,在节点上重启服务的最大尝试次数。 -
--state <disabled | enabled | ignored | started | stopped> (default = started)
--state <disabled | enabled | ignored | started | stopped>(默认 = started) -
Requested resource state.
请求的资源状态。
ha-manager status [OPTIONS]
ha-manager status [选项]
Display HA manager status.
显示 HA 管理器状态。
-
--verbose <boolean> (default = 0)
--verbose <boolean>(默认 = 0) -
Verbose output. Include complete CRM and LRM status (JSON).
详细输出。包含完整的 CRM 和 LRM 状态(JSON 格式)。
23. Appendix B: Service Daemons
23. 附录 B:服务守护进程
23.1. pve-firewall - Proxmox VE Firewall Daemon
23.1. pve-firewall - Proxmox VE 防火墙守护进程
pve-firewall <COMMAND> [ARGS] [OPTIONS]
pve-firewall <命令> [参数] [选项]
pve-firewall compile pve-firewall 编译
Compile and print firewall rules. This is useful for testing.
编译并打印防火墙规则。这对于测试非常有用。
pve-firewall help [OPTIONS]
pve-firewall 帮助 [选项]
Get help about specified command.
获取指定命令的帮助。
- --extra-args <array> --extra-args <数组>
-
Shows help for a specific command
显示特定命令的帮助 - --verbose <boolean> --verbose <布尔值>
-
Verbose output format. 详细输出格式。
pve-firewall localnet
Print information about local network.
打印本地网络信息。
pve-firewall restart
Restart the Proxmox VE firewall service.
重启 Proxmox VE 防火墙服务。
pve-firewall simulate [OPTIONS]
pve-firewall simulate [选项]
Simulate firewall rules. This does not simulate the kernel routing
table, but simply assumes that routing from source zone to destination zone
is possible.
模拟防火墙规则。这不会模拟内核路由表,而是假设从源区域到目标区域的路由是可行的。
- --dest <string> --dest <字符串>
-
Destination IP address. 目标 IP 地址。
- --dport <integer> --dport <整数>
-
Destination port. 目标端口。
-
--from (host|outside|vm\d+|ct\d+|([a-zA-Z][a-zA-Z0-9]{0,9})/(\S+)) (default = outside)
--from (host|outside|vm\d+|ct\d+|([a-zA-Z][a-zA-Z0-9]{0,9})/(\S+))(默认 = outside) -
Source zone. 源区域。
-
--protocol (tcp|udp) (default = tcp)
--protocol (tcp|udp)(默认 = tcp) -
Protocol. 协议。
- --source <string>
-
Source IP address. 源 IP 地址。
- --sport <integer> --sport <整数>
-
Source port. 源端口。
-
--to (host|outside|vm\d+|ct\d+|([a-zA-Z][a-zA-Z0-9]{0,9})/(\S+)) (default = host)
--to (host|outside|vm\d+|ct\d+|([a-zA-Z][a-zA-Z0-9]{0,9})/(\S+))(默认 = host) -
Destination zone. 目标区域。
-
--verbose <boolean> (default = 0)
--verbose <boolean>(默认 = 0) -
Verbose output. 详细输出。
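For example, as a sketch only, the following asks whether TCP traffic from a guest firewall zone (here the hypothetical VM 100) towards the host on port 8006 would be allowed:
pve-firewall simulate --from vm100 --to host --protocol tcp --dport 8006 --verbose 1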
pve-firewall start [OPTIONS]
pve-firewall start [选项]
Start the Proxmox VE firewall service.
启动 Proxmox VE 防火墙服务。
-
--debug <boolean> (default = 0)
--debug <boolean>(默认 = 0) -
Debug mode - stay in foreground
调试模式 - 保持在前台运行
pve-firewall status pve-firewall 状态
Get firewall status. 获取防火墙状态。
pve-firewall stop
Stop the Proxmox VE firewall service. Note that stopping actively removes all
Proxmox VE related iptables rules, rendering the host potentially
unprotected.
停止 Proxmox VE 防火墙服务。注意,停止时会主动移除所有与 Proxmox VE 相关的 iptable 规则,可能导致主机处于无保护状态。
23.2. pvedaemon - Proxmox VE API Daemon
23.2. pvedaemon - Proxmox VE API 守护进程
pvedaemon <COMMAND> [ARGS] [OPTIONS]
pvedaemon <命令> [参数] [选项]
pvedaemon help [OPTIONS] pvedaemon help [选项]
Get help about specified command.
获取指定命令的帮助信息。
- --extra-args <array> --extra-args <数组>
-
Shows help for a specific command
显示特定命令的帮助信息 - --verbose <boolean>
-
Verbose output format. 详细输出格式。
pvedaemon restart pvedaemon 重启
Restart the daemon (or start if not running).
重启守护进程(如果未运行则启动)。
pvedaemon start [OPTIONS]
pvedaemon start [选项]
Start the daemon. 启动守护进程。
-
--debug <boolean> (default = 0)
--debug <布尔值>(默认 = 0) -
Debug mode - stay in foreground
调试模式 - 保持在前台
pvedaemon status pvedaemon 状态
Get daemon status. 获取守护进程状态。
pvedaemon stop 停止 pvedaemon
Stop the daemon. 停止守护进程。
23.3. pveproxy - Proxmox VE API Proxy Daemon
23.3. pveproxy - Proxmox VE API 代理守护进程
pveproxy <COMMAND> [ARGS] [OPTIONS]
pveproxy <命令> [参数] [选项]
pveproxy help [OPTIONS] pveproxy help [选项]
Get help about specified command.
获取指定命令的帮助。
- --extra-args <array> --extra-args <数组>
-
Shows help for a specific command
显示特定命令的帮助 - --verbose <boolean> --verbose <布尔值>
-
Verbose output format. 详细输出格式。
pveproxy restart pveproxy 重启
Restart the daemon (or start if not running).
重启守护进程(如果未运行则启动)。
pveproxy start [OPTIONS] pveproxy 启动 [选项]
Start the daemon. 启动守护进程。
-
--debug <boolean> (default = 0)
--debug <boolean>(默认 = 0) -
Debug mode - stay in foreground
调试模式 - 保持在前台
pveproxy status pveproxy 状态
Get daemon status. 获取守护进程状态。
pveproxy stop pveproxy 停止
Stop the daemon. 停止守护进程。
23.4. pvestatd - Proxmox VE Status Daemon
23.4. pvestatd - Proxmox VE 状态守护进程
pvestatd <COMMAND> [ARGS] [OPTIONS]
pvestatd <命令> [参数] [选项]
pvestatd help [OPTIONS] pvestatd help [选项]
Get help about specified command.
获取指定命令的帮助信息。
- --extra-args <array> --extra-args <数组>
-
Shows help for a specific command
显示特定命令的帮助信息 - --verbose <boolean>
-
Verbose output format. 详细输出格式。
pvestatd restart pvestatd 重启
Restart the daemon (or start if not running).
重启守护进程(如果未运行则启动)。
pvestatd start [OPTIONS] pvestatd start [选项]
Start the daemon. 启动守护进程。
-
--debug <boolean> (default = 0)
--debug <布尔值>(默认 = 0) -
Debug mode - stay in foreground
调试模式 - 保持在前台
pvestatd status pvestatd 状态
Get daemon status. 获取守护进程状态。
pvestatd stop 停止 pvestatd
Stop the daemon. 停止守护进程。
23.5. spiceproxy - SPICE Proxy Service
23.5. spiceproxy - SPICE 代理服务
spiceproxy <COMMAND> [ARGS] [OPTIONS]
spiceproxy <命令> [参数] [选项]
spiceproxy help [OPTIONS]
spiceproxy help [选项]
Get help about specified command.
获取指定命令的帮助。
- --extra-args <array> --extra-args <数组>
-
Shows help for a specific command
显示特定命令的帮助 - --verbose <boolean> --verbose <布尔值>
-
Verbose output format. 详细输出格式。
spiceproxy restart spiceproxy 重启
Restart the daemon (or start if not running).
重启守护进程(如果未运行则启动)。
spiceproxy start [OPTIONS]
spiceproxy 启动 [选项]
Start the daemon. 启动守护进程。
-
--debug <boolean> (default = 0)
--debug <boolean>(默认 = 0) -
Debug mode - stay in foreground
调试模式 - 保持在前台
spiceproxy status spiceproxy 状态
Get daemon status. 获取守护进程状态。
spiceproxy stop spiceproxy 停止
Stop the daemon. 停止守护进程。
23.6. pmxcfs - Proxmox Cluster File System
23.6. pmxcfs - Proxmox 集群文件系统
pmxcfs [OPTIONS] pmxcfs [选项]
Help Options: 帮助选项:
- -h, --help
-
Show help options 显示帮助选项
Application Options: 应用选项:
- -d, --debug -d,--debug
-
Turn on debug messages
开启调试信息 - -f, --foreground -f,--foreground
-
Do not daemonize server
不要将服务器守护进程化 - -l, --local -l,--local
-
Force local mode (ignore corosync.conf, force quorum)
强制本地模式(忽略 corosync.conf,强制仲裁)
This service is usually started and managed using systemd toolset. The
service is called pve-cluster.
该服务通常使用 systemd 工具集启动和管理。该服务名为 pve-cluster。
systemctl start pve-cluster
systemctl stop pve-cluster
systemctl status pve-cluster
23.7. pve-ha-crm - Cluster Resource Manager Daemon
23.7. pve-ha-crm - 集群资源管理守护进程
pve-ha-crm <COMMAND> [ARGS] [OPTIONS]
pve-ha-crm <命令> [参数] [选项]
pve-ha-crm help [OPTIONS]
pve-ha-crm help [选项]
Get help about specified command.
获取指定命令的帮助信息。
- --extra-args <array> --extra-args <数组>
-
Shows help for a specific command
显示特定命令的帮助信息 - --verbose <boolean> --verbose <布尔值>
-
Verbose output format. 详细输出格式。
pve-ha-crm start [OPTIONS]
pve-ha-crm start [选项]
Start the daemon. 启动守护进程。
-
--debug <boolean> (default = 0)
--debug <布尔值>(默认 = 0) -
Debug mode - stay in foreground
调试模式 - 保持在前台运行
pve-ha-crm status pve-ha-crm 状态
Get daemon status. 获取守护进程状态。
pve-ha-crm stop pve-ha-crm 停止
Stop the daemon. 停止守护进程。
23.8. pve-ha-lrm - Local Resource Manager Daemon
23.8. pve-ha-lrm - 本地资源管理守护进程
pve-ha-lrm <COMMAND> [ARGS] [OPTIONS]
pve-ha-lrm <命令> [参数] [选项]
pve-ha-lrm help [OPTIONS]
pve-ha-lrm help [选项]
Get help about specified command.
获取指定命令的帮助信息。
- --extra-args <array> --extra-args <数组>
-
Shows help for a specific command
显示特定命令的帮助信息 - --verbose <boolean> --verbose <布尔值>
-
Verbose output format. 详细输出格式。
pve-ha-lrm start [OPTIONS]
pve-ha-lrm 启动 [选项]
Start the daemon. 启动守护进程。
-
--debug <boolean> (default = 0)
--debug <布尔值>(默认 = 0) -
Debug mode - stay in foreground
调试模式 - 保持在前台运行
pve-ha-lrm status pve-ha-lrm 状态
Get daemon status. 获取守护进程状态。
pve-ha-lrm stop pve-ha-lrm 停止
Stop the daemon. 停止守护进程。
23.9. pvescheduler - Proxmox VE Scheduler Daemon
23.9. pvescheduler - Proxmox VE 调度守护进程
pvescheduler <COMMAND> [ARGS] [OPTIONS]
pvescheduler <命令> [参数] [选项]
pvescheduler help [OPTIONS]
pvescheduler help [选项]
Get help about specified command.
获取指定命令的帮助信息。
- --extra-args <array> --extra-args <数组>
-
Shows help for a specific command
显示特定命令的帮助信息 - --verbose <boolean> --verbose <布尔值>
-
Verbose output format. 详细输出格式。
pvescheduler restart pvescheduler 重启
Restart the daemon (or start if not running).
重启守护进程(如果未运行则启动)。
pvescheduler start [OPTIONS]
pvescheduler 启动 [选项]
Start the daemon. 启动守护进程。
-
--debug <boolean> (default = 0)
--debug <boolean>(默认 = 0) -
Debug mode - stay in foreground
调试模式 - 保持在前台运行
pvescheduler status pvescheduler 状态
Get daemon status. 获取守护进程状态。
pvescheduler stop pvescheduler 停止
Stop the daemon. 停止守护进程。
24. Appendix C: Configuration Files
24. 附录 C:配置文件
24.1. General 24.1. 概述
Most configuration files in Proxmox VE reside on the
shared cluster file system mounted at /etc/pve. There are
exceptions, like the node-specific configuration file for backups in
/etc/vzdump.conf.
Proxmox VE 中的大多数配置文件位于挂载在 /etc/pve 的共享集群文件系统上。但也有例外,比如位于 /etc/vzdump.conf 的节点特定备份配置文件。
Usually, the properties in a configuration file are derived from the JSON Schema
that is also used for the associated API endpoints.
通常,配置文件中的属性来源于 JSON Schema,该 Schema 也用于相关的 API 端点。
24.1.1. Casing of Property Names
24.1.1. 属性名称的大小写
Historically, longer properties (and sub-properties) often used snake_case, or
were written as one word. This can likely be attributed to the Proxmox VE stack being
developed mostly in the programming language Perl, where access to properties
using kebab-case requires additional quotes, as well as less style enforcement
during early development, so different developers used different conventions.
历史上,较长的属性(及子属性)通常使用 snake_case,或者写成一个单词。这很可能是因为 Proxmox VE 堆栈主要使用编程语言 Perl 开发,而在 Perl 中,使用 kebab-case 访问属性需要额外的引号,加之早期开发阶段风格约束较少,不同开发者采用了不同的命名规范。
For new properties, kebab-case is the preferred way and it is planned to
introduce aliases for existing snake_case properties, and in the long term,
switch over to kebab-case for the API, CLI and in-use configuration files
while maintaining backwards-compatibility when restoring a configuration.
对于新属性,推荐使用 kebab-case,并计划为现有的 snake_case 属性引入别名,长期来看,将在 API、CLI 和正在使用的配置文件中切换到 kebab-case,同时在恢复配置时保持向后兼容。
24.2. Datacenter Configuration
24.2. 数据中心配置
The file /etc/pve/datacenter.cfg is a configuration file for
Proxmox VE. It contains cluster wide default values used by all nodes.
文件 /etc/pve/datacenter.cfg 是 Proxmox VE 的配置文件。它包含所有节点使用的集群范围的默认值。
24.2.1. File Format 24.2.1. 文件格式
The file uses a simple colon separated key/value format. Each line has
the following format:
该文件使用简单的冒号分隔的键/值格式。每行的格式如下:
OPTION: value
Blank lines in the file are ignored, and lines starting with a #
character are treated as comments and are also ignored.
文件中的空行会被忽略,以#字符开头的行被视为注释,也会被忽略。
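A minimal illustrative /etc/pve/datacenter.cfg using this format might look like the following; all values are placeholders chosen for the example, and the options themselves are described in the next section:
keyboard: en-us
console: html5
migration: type=secure,network=10.10.10.0/24
mac_prefix: BC:24:11:2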
24.2.2. Options 24.2.2. 选项
- bwlimit: [clone=<LIMIT>] [,default=<LIMIT>] [,migration=<LIMIT>] [,move=<LIMIT>] [,restore=<LIMIT>]
-
Set I/O bandwidth limit for various operations (in KiB/s).
为各种操作设置 I/O 带宽限制(以 KiB/s 为单位)。- clone=<LIMIT>
-
bandwidth limit in KiB/s for cloning disks
克隆磁盘的带宽限制,单位为 KiB/s - default=<LIMIT>
-
default bandwidth limit in KiB/s
默认带宽限制,单位为 KiB/s - migration=<LIMIT>
-
bandwidth limit in KiB/s for migrating guests (including moving local disks)
迁移虚拟机(包括移动本地磁盘)的带宽限制,单位为 KiB/s - move=<LIMIT>
-
bandwidth limit in KiB/s for moving disks
移动磁盘的带宽限制,单位为 KiB/s - restore=<LIMIT>
-
bandwidth limit in KiB/s for restoring guests from backups
从备份恢复虚拟机的带宽限制,单位为 KiB/s
- consent-text: <string>
-
Consent text that is displayed before logging in.
登录前显示的同意文本。 -
console: <applet | html5 | vv | xtermjs>
控制台: <applet | html5 | vv | xtermjs> -
Select the default Console viewer. You can either use the builtin java applet (VNC; deprecated and maps to html5), an external virt-viewer compatible application (SPICE), an HTML5 based vnc viewer (noVNC), or an HTML5 based console client (xtermjs). If the selected viewer is not available (e.g. SPICE not activated for the VM), the fallback is noVNC.
选择默认的控制台查看器。您可以使用内置的 Java 小程序(VNC;已弃用并映射到 html5)、外部兼容 virt-viewer 的应用程序(SPICE)、基于 HTML5 的 VNC 查看器(noVNC)或基于 HTML5 的控制台客户端(xtermjs)。如果所选查看器不可用(例如,虚拟机未启用 SPICE),则回退使用 noVNC。 - crs: [ha=<basic|static>] [,ha-rebalance-on-start=<1|0>]
-
Cluster resource scheduling settings.
集群资源调度设置。-
ha=<basic | static> (default = basic)
ha=<basic | static>(默认 = basic) -
Configures how the HA manager should select nodes to start or recover services. With basic, only the number of services is used, with static, static CPU and memory configuration of services is considered.
配置 HA 管理器如何选择节点来启动或恢复服务。使用 basic 时,仅考虑服务数量;使用 static 时,会考虑服务的静态 CPU 和内存配置。 -
ha-rebalance-on-start=<boolean> (default = 0)
ha-rebalance-on-start=<boolean>(默认 = 0) -
Set to use CRS for selecting a suited node when a HA services request-state changes from stop to start.
设置为在高可用服务请求状态从停止变为启动时,使用 CRS 选择合适的节点。
- description: <string> 描述:<string>
-
Datacenter description. Shown in the web-interface datacenter notes panel. This is saved as comment inside the configuration file.
数据中心描述。在网页界面数据中心备注面板中显示。此内容作为注释保存在配置文件中。 - email_from: <string> email_from:<string>
-
Specify email address to send notification from (default is root@$hostname)
指定用于发送通知的电子邮件地址(默认是 root@$hostname) -
fencing: <both | hardware | watchdog> (default = watchdog)
fencing: <both | hardware | watchdog>(默认 = watchdog) -
Set the fencing mode of the HA cluster. Hardware mode needs a valid configuration of fence devices in /etc/pve/ha/fence.cfg. With both, both modes are used.
设置 HA 集群的围栏模式。硬件模式需要在 /etc/pve/ha/fence.cfg 中有有效的围栏设备配置。选择 both 时,两种模式都会被使用。hardware and both are EXPERIMENTAL & WIP
hardware 和 both 是实验性功能,仍在开发中 -
ha: shutdown_policy=<enum>
ha: shutdown_policy=<枚举> -
Cluster wide HA settings.
集群范围的高可用性设置。-
shutdown_policy=<conditional | failover | freeze | migrate> (default = conditional)
shutdown_policy=<conditional | failover | freeze | migrate>(默认 = conditional) -
Describes the policy for handling HA services on poweroff or reboot of a node. Freeze will always freeze services which are still located on the node on shutdown; those services won’t be recovered by the HA manager. Failover will not mark the services as frozen and thus the services will get recovered to other nodes, if the shutdown node does not come up again quickly (< 1min). conditional chooses automatically depending on the type of shutdown, i.e., on a reboot the service will be frozen but on a poweroff the service will stay as is, and thus get recovered after about 2 minutes. Migrate will try to move all running services to another node when a reboot or shutdown was triggered. The poweroff process will only continue once no running services are located on the node anymore. If the node comes up again, the service will be moved back to the previously powered-off node, at least if no other migration, relocation or recovery took place.
描述在节点关机或重启时处理高可用性服务的策略。Freeze 会在关机时始终冻结仍位于该节点上的服务,这些服务不会被高可用管理器恢复。Failover 不会将服务标记为冻结,因此如果关机节点未能快速重新启动(< 1 分钟),服务将被恢复到其他节点。Conditional 会根据关机类型自动选择策略,即重启时服务会被冻结,但关机时服务保持原状,因此大约两分钟后会被恢复。Migrate 会在触发重启或关机时尝试将所有运行中的服务迁移到其他节点。只有当节点上不再有运行中的服务时,关机过程才会继续。如果节点重新启动,服务将被迁回之前关机的节点,前提是没有发生其他迁移、重新定位或恢复操作。
- http_proxy: http://.*
-
Specify external http proxy which is used for downloads (example: http://username:password@host:port/)
指定用于下载的外部 http 代理(示例:http://username:password@host:port/) - keyboard: <da | de | de-ch | en-gb | en-us | es | fi | fr | fr-be | fr-ca | fr-ch | hu | is | it | ja | lt | mk | nl | no | pl | pt | pt-br | sl | sv | tr>
-
Default keyboard layout for the VNC server.
VNC 服务器的默认键盘布局。 - language: <ar | ca | da | de | en | es | eu | fa | fr | he | hr | it | ja | ka | kr | nb | nl | nn | pl | pt_BR | ru | sl | sv | tr | ukr | zh_CN | zh_TW>
-
Default GUI language. 默认的图形用户界面语言。
-
mac_prefix: <string> (default = BC:24:11)
mac_prefix: <string> (默认 = BC:24:11) -
Prefix for the auto-generated MAC addresses of virtual guests. The default BC:24:11 is the Organizationally Unique Identifier (OUI) assigned by the IEEE to Proxmox Server Solutions GmbH for a MAC Address Block Large (MA-L). You’re allowed to use this in local networks, i.e., those not directly reachable by the public (e.g., in a LAN or NAT/Masquerading).
虚拟客户机自动生成的 MAC 地址前缀。默认的 BC:24:11 是 IEEE 分配给 Proxmox Server Solutions GmbH 的组织唯一标识符(OUI),用于 MAC 地址大块(MA-L)。您可以在本地网络中使用此前缀,即那些不直接被公网访问的网络(例如局域网或 NAT/伪装网络)。
Note that when you run multiple clusters that (partially) share the networks of their virtual guests, it’s highly recommended that you extend the default MAC prefix, or generate a custom (valid) one, to reduce the chance of MAC collisions. For example, add a separate extra hexadecimal to the Proxmox OUI for each cluster, like BC:24:11:0 for the first, BC:24:11:1 for the second, and so on.
Alternatively, you can also separate the networks of the guests logically, e.g., by using VLANs.
请注意,当您运行多个集群且它们的虚拟客户机网络部分共享时,强烈建议您扩展默认的 MAC 前缀,或生成一个自定义(有效的)前缀,以减少 MAC 地址冲突的可能性。例如,为每个集群在 Proxmox 的 OUI 后添加一个额外的十六进制数,第一个集群为 BC:24:11:0,第二个为 BC:24:11:1,依此类推。或者,您也可以通过逻辑上分离客户机的网络,例如使用 VLAN 来实现。
For publicly accessible guests it’s recommended that you get your own OUI registered with the IEEE, or coordinate with your, or your hosting provider’s, network admins.
对于公开访问的客户机,建议您从 IEEE 注册处获取自己的 OUI,或与您或您的托管服务提供商的网络管理员协调。
-
max_workers: <integer> (1 - N)
max_workers: <整数>(1 - N) -
Defines the maximum number of workers (per node) started for actions like stopping all VMs, or for tasks from the ha-manager.
定义在执行如停止所有虚拟机或 ha-manager 任务等操作时,每个节点最多启动的工作线程数。 - migration: [type=]<secure|insecure> [,network=<CIDR>]
-
For cluster wide migration settings.
用于集群范围的迁移设置。- network=<CIDR>
-
CIDR of the (sub) network that is used for migration.
用于迁移的(子)网络的 CIDR。 -
type=<insecure | secure> (default = secure)
type=<不安全 | 安全>(默认 = 安全) -
Migration traffic is encrypted using an SSH tunnel by default. On secure, completely private networks this can be disabled to increase performance.
迁移流量默认通过 SSH 隧道加密。在安全的完全私有网络中,可以禁用此功能以提高性能。
-
migration_unsecure: <boolean>
migration_unsecure: <布尔值> -
Migration is secure using SSH tunnel by default. For secure private networks you can disable it to speed up migration. Deprecated, use the migration property instead!
迁移默认通过 SSH 隧道进行安全保护。对于安全的私有网络,可以禁用此功能以加快迁移速度。已弃用,请改用 migration 属性! -
next-id: [lower=<integer>] [,upper=<integer>]
next-id: [lower=<整数>] [,upper=<整数>] -
Control the range for the free VMID auto-selection pool.
控制用于自动选择空闲 VMID 的范围池。-
lower=<integer> (default = 100)
lower=<整数>(默认值 = 100) -
Lower, inclusive boundary for free next-id API range.
空闲 next-id API 范围的下限(包含该值)。 -
upper=<integer> (default = 1000000)
upper=<整数>(默认值 = 1000000) -
Upper, exclusive boundary for free next-id API range.
upper,free next-id API 范围的排他上限。
- notify: [fencing=<always|never>] [,package-updates=<auto|always|never>] [,replication=<always|never>] [,target-fencing=<TARGET>] [,target-package-updates=<TARGET>] [,target-replication=<TARGET>]
-
Cluster-wide notification settings.
集群范围的通知设置。-
fencing=<always | never>
fencing=<始终 | 从不> -
UNUSED - Use datacenter notification settings instead.
未使用 - 请改用数据中心通知设置。 -
package-updates=<always | auto | never> (default = auto)
package-updates=<始终 | 自动 | 从不>(默认 = 自动) -
DEPRECATED: Use datacenter notification settings instead. Control how often the daily update job should send out notifications:
已弃用:请改用数据中心通知设置。控制每日更新任务发送通知的频率:-
auto daily for systems with a valid subscription, as those are assumed to be production-ready and thus should know about pending updates.
对于拥有有效订阅的系统,自动每日检查,因为这些系统被认为是生产就绪的,因此应了解待处理的更新。 -
always every update, if there are new pending updates.
如果有新的待处理更新,则始终每次更新时通知。 -
never never send a notification for new pending updates.
从不发送有关新待处理更新的通知。
-
- replication=<always | never>
-
UNUSED - Use datacenter notification settings instead.
未使用 - 请改用数据中心通知设置。 - target-fencing=<TARGET>
-
UNUSED - Use datacenter notification settings instead.
未使用 - 请改用数据中心通知设置。 - target-package-updates=<TARGET>
-
UNUSED - Use datacenter notification settings instead.
未使用 - 请改用数据中心通知设置。 - target-replication=<TARGET>
-
UNUSED - Use datacenter notification settings instead.
未使用 - 请改用数据中心通知设置。
- registered-tags: <tag>[;<tag>...]
-
A list of tags that require a Sys.Modify on / to set and delete. Tags set here that are also in user-tag-access also require Sys.Modify.
需要在 / 上进行 Sys.Modify 权限以设置和删除的标签列表。此处设置的标签如果也出现在 user-tag-access 中,同样需要 Sys.Modify 权限。 -
tag-style: [case-sensitive=<1|0>] [,color-map=<tag>:<hex-color>[:<hex-color-for-text>][;<tag>=...]] [,ordering=<config|alphabetical>] [,shape=<enum>]
tag-style: [case-sensitive=<1|0>] [,color-map=<tag>:<十六进制颜色>[:<文本用十六进制颜色>][;<tag>=...]] [,ordering=<配置|字母顺序>] [,shape=<枚举>] -
Tag style options. 标签样式选项。
-
case-sensitive=<boolean> (default = 0)
case-sensitive=<布尔值>(默认 = 0) -
Controls if filtering for unique tags on update should check case-sensitive.
控制在更新时对唯一标签进行过滤时是否区分大小写。 -
color-map=<tag>:<hex-color>[:<hex-color-for-text>][;<tag>=...]
color-map=<标签>:<十六进制颜色>[:<文本用十六进制颜色>][;<标签>=...] -
Manual color mapping for tags (semicolon separated).
标签的手动颜色映射(以分号分隔)。 -
ordering=<alphabetical | config> (default = alphabetical)
ordering=<alphabetical | config>(默认 = alphabetical) -
Controls the sorting of the tags in the web-interface and the API update.
控制网页界面和 API 更新中标签的排序。 -
shape=<circle | dense | full | none> (default = circle)
shape=<circle | dense | full | none>(默认 = circle) -
Tag shape for the web ui tree. full draws the full tag. circle draws only a circle with the background color. dense only draws a small rectangle (useful when many tags are assigned to each guest). none disables showing the tags.
网页 UI 树的标签形状。full 绘制完整标签。circle 仅绘制带背景色的圆圈。dense 仅绘制一个小矩形(当每个客户机分配许多标签时非常有用)。none 禁用标签显示。
- u2f: [appid=<APPID>] [,origin=<URL>]
-
u2f
- appid=<APPID>
-
U2F AppId URL override. Defaults to the origin.
U2F AppId URL 覆盖。默认为来源。 - origin=<URL>
-
U2F Origin override. Mostly useful for single nodes with a single URL.
U2F 源覆盖。主要适用于具有单一 URL 的单节点。
-
user-tag-access: [user-allow=<enum>] [,user-allow-list=<tag>[;<tag>...]]
user-tag-access: [user-allow=<枚举>] [,user-allow-list=<标签>[;<标签>...]] -
Privilege options for user-settable tags
用户可设置标签的权限选项-
user-allow=<existing | free | list | none> (default = free)
user-allow=<existing | free | list | none>(默认 = free) -
Controls which tags can be set or deleted on resources a user controls (such as guests). Users with the Sys.Modify privilege on / are always unrestricted.
控制用户可以在其控制的资源(如虚拟机)上设置或删除哪些标签。对 / 拥有 Sys.Modify 权限的用户始终不受限制。-
none no tags are usable.
none 不可使用任何标签。 -
list tags from user-allow-list are usable.
list 只能使用用户允许列表中的标签。 -
existing like list, but already existing tags of resources are also usable.
existing 类似于 list,但资源上已存在的标签也可使用。 -
free no tag restrictions.
免费,无标签限制。
-
-
user-allow-list=<tag>[;<tag>...]
user-allow-list=<标签>[;<标签>...] -
List of tags users are allowed to set and delete (semicolon separated) for user-allow values list and existing.
允许用户设置和删除的标签列表(用分号分隔),适用于 user-allow 值列表和现有标签。
-
webauthn: [allow-subdomains=<1|0>] [,id=<DOMAINNAME>] [,origin=<URL>] [,rp=<RELYING_PARTY>]
webauthn: [allow-subdomains=<1|0>] [,id=<域名>] [,origin=<URL>] [,rp=<依赖方>] -
webauthn configuration webauthn 配置
-
allow-subdomains=<boolean> (default = 1)
allow-subdomains=<boolean>(默认值 = 1) -
Whether to allow the origin to be a subdomain, rather than the exact URL.
是否允许来源为子域名,而非精确的 URL。 - id=<DOMAINNAME>
-
Relying party ID. Must be the domain name without protocol, port or location. Changing this will break existing credentials.
依赖方 ID。必须是没有协议、端口或路径的域名。更改此项将导致现有凭据失效。 - origin=<URL>
-
Site origin. Must be an https:// URL (or http://localhost). Should contain the address users type in their browsers to access the web interface. Changing this may break existing credentials.
站点来源。必须是 https:// URL(或 http://localhost)。应包含用户在浏览器中输入以访问网页界面的地址。更改此项可能会导致现有凭据失效。 - rp=<RELYING_PARTY>
-
Relying party name. Any text identifier. Changing this may break existing credentials.
依赖方名称。任何文本标识符。更改此项可能会导致现有凭据失效。
25. Appendix D: Calendar Events
25. 附录 D:日历事件
25.1. Schedule Format 25.1. 时间表格式
Proxmox VE has a very flexible scheduling configuration. It is based on the systemd
time calendar event format.[59]
Calendar events may be used to refer to one or more points in time in a
single expression.
Proxmox VE 具有非常灵活的调度配置。它基于 systemd 时间日历事件格式。[59] 日历事件可用于在单个表达式中引用一个或多个时间点。
Such a calendar event uses the following format:
这样的日历事件使用以下格式:
[WEEKDAY] [[YEARS-]MONTHS-DAYS] [HOURS:MINUTES[:SECONDS]]
This format allows you to configure a set of days on which the job should run.
You can also set one or more start times. It tells the replication scheduler
the moments in time when a job should start.
With this information, we can create a job which runs every workday at 10
PM: 'mon,tue,wed,thu,fri 22', which could be abbreviated to: 'mon..fri
22'; most reasonable schedules can be written quite intuitively this way.
该格式允许您配置一组作业应运行的日期。您还可以设置一个或多个开始时间。它告诉复制调度器作业应启动的时间点。根据这些信息,我们可以创建一个在每个工作日晚上 10 点运行的作业:'mon,tue,wed,thu,fri 22',也可以简写为:'mon..fri 22',大多数合理的计划都可以用这种方式直观地编写。
Note: Hours are formatted in 24-hour format. 小时采用 24 小时制格式。
To allow a convenient and shorter configuration, one or more repeat times per
guest can be set. They indicate that replications are done on the start-time(s)
itself and the start-time(s) plus all multiples of the repetition value. If
you want to start replication at 8 AM and repeat it every 15 minutes until
9 AM you would use: '8:00/15'
为了方便和简化配置,可以为每个客户机设置一个或多个重复时间。它们表示复制将在开始时间本身以及开始时间加上重复间隔的所有倍数时执行。如果您想在上午 8 点开始复制,并每 15 分钟重复一次直到上午 9 点,可以使用:'8:00/15'。
Here you see that if no hour separator (:) is used, the value gets
interpreted as minutes. If such a separator is used, the value on the left
denotes the hour(s), and the value on the right denotes the minute(s).
Further, you can use * to match all possible values.
这里你可以看到,如果没有使用小时分隔符(:),该值会被解释为分钟。如果使用了这样的分隔符,左边的值表示小时,右边的值表示分钟。此外,你可以使用 * 来匹配所有可能的值。
To get additional ideas look at
more Examples below.
要获取更多灵感,请查看下面的更多示例。
25.2. Detailed Specification
25.2. 详细说明
- weekdays 工作日
-
Days are specified with an abbreviated English version: sun, mon, tue, wed, thu, fri and sat. You may use multiple days as a comma-separated list. A range of days can also be set by specifying the start and end day separated by “..”, for example mon..fri. These formats can be mixed. If omitted '*' is assumed.
天数使用英文缩写表示:sun、mon、tue、wed、thu、fri 和 sat。您可以使用逗号分隔的多个天数列表。也可以通过指定起始和结束天数并用“..”分隔来设置天数范围,例如 mon..fri。这些格式可以混合使用。如果省略,则默认为 '*'。 - time-format
-
A time format consists of hours and minutes interval lists. Hours and minutes are separated by ':'. Both hours and minutes can be lists and ranges of values, using the same format as days. First are hours, then minutes. Hours can be omitted if not needed. In this case '*' is assumed for the value of hours. The valid range for values is 0-23 for hours and 0-59 for minutes.
时间格式由小时和分钟的间隔列表组成。小时和分钟之间用“:”分隔。小时和分钟都可以是列表或范围,使用与天数相同的格式。先写小时,再写分钟。如果不需要,可以省略小时,此时小时默认为 '*'。有效值范围为小时 0-23,分钟 0-59。
25.2.1. Examples: 25.2.1. 示例:
There are some special values that have a specific meaning:
有一些特殊值具有特定含义:
| Value 值 | Syntax 语法 |
|---|---|
minutely 每分钟 |
*-*-* *:*:00 |
hourly 每小时 |
*-*-* *:00:00 |
daily 每日 |
*-*-* 00:00:00 |
weekly 每周 |
mon *-*-* 00:00:00 周一 *-*-* 00:00:00 |
monthly 每月 |
*-*-01 00:00:00 |
yearly or annually 每年或年度 |
*-01-01 00:00:00 |
quarterly 每季度 |
*-01,04,07,10-01 00:00:00 |
semiannually or semi-annually |
*-01,07-01 00:00:00 |
| Schedule String 时间表字符串 | Alternative 替代方案 | Meaning 含义 |
|---|---|---|
mon,tue,wed,thu,fri 周一,周二,周三,周四,周五 |
mon..fri 周一到周五 |
Every working day at 0:00 |
sat,sun 周六,周日 |
sat..sun 周六..周日 |
Only on weekends at 0:00 |
mon,wed,fri 周一,周三,周五 |
— |
Only on Monday, Wednesday and Friday at 0:00 |
12:05 |
12:05 |
Every day at 12:05 PM |
*/5 |
0/5 |
Every five minutes 每五分钟一次 |
mon..wed 30/10 周一至周三 30/10 |
mon,tue,wed 30/10 周一、周二、周三 30/10 |
Monday, Tuesday, Wednesday 30, 40 and 50 minutes after every full hour |
mon..fri 8..17,22:0/15 周一至周五 8 点至 17 点,22 点 0 分起每 15 分钟一次 |
— |
Every working day every 15 minutes between 8 AM and 6 PM and between 10 PM and 11 PM |
fri 12..13:5/20 周五 12..13:5/20 |
fri 12,13:5/20 周五 12,13:5/20 |
Friday at 12:05, 12:25, 12:45, 13:05, 13:25 and 13:45 |
12,14,16,18,20,22:5 |
12/2:5 |
Every day starting at 12:05 until 22:05, every 2 hours |
* |
*/1 |
Every minute (minimum interval) |
*-05 |
— |
On the 5th day of every Month |
Sat *-1..7 15:00 周六 *-1..7 15:00 |
— |
First Saturday each Month at 15:00 |
2015-10-21 |
— |
21st October 2015 at 00:00 |
26. Appendix E: QEMU vCPU List
26. 附录 E:QEMU vCPU 列表
26.1. Introduction 26.1. 介绍
This is a list of AMD and Intel x86-64/amd64 CPU types as defined in QEMU,
going back to 2007.
这是一个 QEMU 中定义的 AMD 和 Intel x86-64/amd64 CPU 类型列表,追溯到 2007 年。
26.2. Intel CPU Types
26.2. Intel CPU 类型
-
Nehalem : 1st generation of the Intel Core processor
Nehalem:第一代 Intel Core 处理器 -
Nehalem-IBRS (v2) : add Spectre v1 protection (+spec-ctrl)
Nehalem-IBRS(v2):增加 Spectre v1 保护(+spec-ctrl) -
Westmere : 1st generation of the Intel Core processor (Xeon E7-)
Westmere:英特尔酷睿处理器第一代(Xeon E7-) -
Westmere-IBRS (v2) : add Spectre v1 protection (+spec-ctrl)
Westmere-IBRS(v2):增加 Spectre v1 保护(+spec-ctrl) -
SandyBridge : 2nd generation of the Intel Core processor
SandyBridge:英特尔酷睿处理器第二代 -
SandyBridge-IBRS (v2) : add Spectre v1 protection (+spec-ctrl)
SandyBridge-IBRS(v2):增加 Spectre v1 保护(+spec-ctrl) -
IvyBridge : 3rd generation of the Intel Core processor
IvyBridge:英特尔酷睿处理器第三代 -
IvyBridge-IBRS (v2): add Spectre v1 protection (+spec-ctrl)
IvyBridge-IBRS(v2):增加 Spectre v1 保护(+spec-ctrl) -
Haswell : 4th generation of the Intel Core processor
Haswell:英特尔酷睿处理器第四代 -
Haswell-noTSX (v2) : disable TSX (-hle, -rtm)
Haswell-noTSX(v2):禁用 TSX(-hle,-rtm) -
Haswell-IBRS (v3) : re-add TSX, add Spectre v1 protection (+hle, +rtm, +spec-ctrl)
Haswell-IBRS(v3):重新添加 TSX,增加 Spectre v1 保护(+hle,+rtm,+spec-ctrl) -
Haswell-noTSX-IBRS (v4) : disable TSX (-hle, -rtm)
Haswell-noTSX-IBRS(v4):禁用 TSX(-hle,-rtm) -
Broadwell: 5th generation of the Intel Core processor
Broadwell:英特尔酷睿处理器第五代 -
Skylake: 1st generation Xeon Scalable server processors
Skylake:第一代至强可扩展服务器处理器 -
Skylake-IBRS (v2) : add Spectre v1 protection, disable CLFLUSHOPT (+spec-ctrl, -clflushopt)
Skylake-IBRS (v2):添加 Spectre v1 保护,禁用 CLFLUSHOPT(+spec-ctrl,-clflushopt) -
Skylake-noTSX-IBRS (v3) : disable TSX (-hle, -rtm)
Skylake-noTSX-IBRS (v3):禁用 TSX(-hle,-rtm) -
Skylake-v4: add EPT switching (+vmx-eptp-switching)
Skylake-v4:添加 EPT 切换(+vmx-eptp-switching) -
Cascadelake: 2nd generation Xeon Scalable processor
Cascadelake:第二代 Xeon 可扩展处理器 -
Cascadelake-v2 : add arch_capabilities msr (+arch-capabilities, +rdctl-no, +ibrs-all, +skip-l1dfl-vmentry, +mds-no)
Cascadelake-v2:添加 arch_capabilities msr(+arch-capabilities,+rdctl-no,+ibrs-all,+skip-l1dfl-vmentry,+mds-no) -
Cascadelake-v3 : disable TSX (-hle, -rtm)
Cascadelake-v3:禁用 TSX(-hle,-rtm) -
Cascadelake-v4 : add EPT switching (+vmx-eptp-switching)
Cascadelake-v4:添加 EPT 切换(+vmx-eptp-switching) -
Cascadelake-v5 : add XSAVES (+xsaves, +vmx-xsaves)
Cascadelake-v5:添加 XSAVES(+xsaves,+vmx-xsaves) -
Cooperlake : 3rd generation Xeon Scalable processors for 4 & 8 sockets servers
Cooperlake:用于 4 和 8 插槽服务器的第三代至强可扩展处理器 -
Cooperlake-v2 : add XSAVES (+xsaves, +vmx-xsaves)
Cooperlake-v2:添加 XSAVES(+xsaves,+vmx-xsaves) -
Icelake: 3rd generation Xeon Scalable server processors
Icelake:第三代至强可扩展服务器处理器 -
Icelake-v2 : disable TSX (-hle, -rtm)
Icelake-v2:禁用 TSX(-hle,-rtm) -
Icelake-v3 : add arch_capabilities msr (+arch-capabilities, +rdctl-no, +ibrs-all, +skip-l1dfl-vmentry, +mds-no, +pschange-mc-no, +taa-no)
Icelake-v3:添加 arch_capabilities msr(+arch-capabilities,+rdctl-no,+ibrs-all,+skip-l1dfl-vmentry,+mds-no,+pschange-mc-no,+taa-no) -
Icelake-v4 : add missing flags (+sha-ni, +avx512ifma, +rdpid, +fsrm, +vmx-rdseed-exit, +vmx-pml, +vmx-eptp-switching)
Icelake-v4:添加缺失的标志(+sha-ni,+avx512ifma,+rdpid,+fsrm,+vmx-rdseed-exit,+vmx-pml,+vmx-eptp-switching) -
Icelake-v5 : add XSAVES (+xsaves, +vmx-xsaves)
Icelake-v5:添加 XSAVES(+xsaves,+vmx-xsaves) -
Icelake-v6 : add "5-level EPT" (+vmx-page-walk-5)
Icelake-v6:添加“5 级 EPT”(+vmx-page-walk-5) -
SapphireRapids : 4th generation Xeon Scalable server processors
SapphireRapids:第四代至强可扩展服务器处理器
26.3. AMD CPU Types
26.3. AMD CPU 类型
-
Opteron_G3 : K10 Opteron_G3:K10
-
Opteron_G4 : Bulldozer Opteron_G4:推土机
-
Opteron_G5 : Piledriver Opteron_G5:堆叠驱动器
-
EPYC : 1st generation of Zen processors
EPYC:第一代 Zen 处理器 -
EPYC-IBPB (v2) : add Spectre v1 protection (+ibpb)
EPYC-IBPB(v2):增加 Spectre v1 保护(+ibpb) -
EPYC-v3 : add missing flags (+perfctr-core, +clzero, +xsaveerptr, +xsaves)
EPYC-v3:添加缺失的标志(+perfctr-core,+clzero,+xsaveerptr,+xsaves) -
EPYC-Rome : 2nd generation of Zen processors
EPYC-Rome:第二代 Zen 处理器 -
EPYC-Rome-v2 : add Spectre v2, v4 protection (+ibrs, +amd-ssbd)
EPYC-Rome-v2:添加 Spectre v2、v4 防护(+ibrs,+amd-ssbd) -
EPYC-Milan : 3rd generation of Zen processors
EPYC-Milan:第三代 Zen 处理器 -
EPYC-Milan-v2 : add missing flags (+vaes, +vpclmulqdq, +stibp-always-on, +amd-psfd, +no-nested-data-bp, +lfence-always-serializing, +null-sel-clr-base)
EPYC-Milan-v2:添加缺失的标志(+vaes,+vpclmulqdq,+stibp-always-on,+amd-psfd,+no-nested-data-bp,+lfence-always-serializing,+null-sel-clr-base)
27. Appendix F: Firewall Macro Definitions
27. 附录 F:防火墙宏定义
|
Amanda
|
Amanda Backup Amanda 备份 |
| Action 动作 | proto 协议 | dport 目标端口 | sport 源端口 |
|---|---|---|---|
PARAM |
udp |
10080 |
|
PARAM |
tcp |
10080 |
|
Auth 认证
|
Auth (identd) traffic 认证(identd)流量 |
| Action 动作 | proto 协议 | dport 目标端口 | sport 源端口 |
|---|---|---|---|
PARAM |
tcp |
113 |
|
BGP
|
Border Gateway Protocol traffic
|
| Action 操作 | proto 协议 | dport 目标端口 | sport 源端口 |
|---|---|---|---|
PARAM |
tcp 传输控制协议 |
179 |
|
BitTorrent
|
BitTorrent traffic for BitTorrent 3.1 and earlier
|
| Action 操作 | proto 协议 | dport 目标端口 | sport 运动 |
|---|---|---|---|
PARAM |
tcp |
6881:6889 |
|
PARAM |
udp |
6881 |
|
BitTorrent32
|
BitTorrent traffic for BitTorrent 3.2 and later
|
| Action 操作 | proto 协议 | dport 目标端口 | sport 运动 |
|---|---|---|---|
PARAM |
tcp |
6881:6999 |
|
PARAM |
udp |
6881 |
|
CVS
|
Concurrent Versions System pserver traffic
|
| Action 动作 | proto 协议 | dport 目标端口 | sport 源端口 |
|---|---|---|---|
PARAM |
tcp |
2401 |
|
Ceph
|
Ceph Storage Cluster traffic (Ceph Monitors, OSD & MDS Daemons)
|
| Action 操作 | proto 协议 | dport 目标端口 | sport 源端口 |
|---|---|---|---|
PARAM |
tcp |
6789 |
|
PARAM |
tcp |
3300 |
|
PARAM |
tcp |
6800:7300 |
|
Citrix
|
Citrix/ICA traffic (ICA, ICA Browser, CGP)
|
| Action 动作 | proto 协议 | dport 目标端口 | sport 源端口 |
|---|---|---|---|
PARAM |
tcp |
1494 |
|
PARAM |
udp |
1604 |
|
PARAM |
tcp |
2598 |
|
DAAP
|
Digital Audio Access Protocol traffic (iTunes, Rhythmbox daemons)
|
| Action 动作 | proto 协议 | dport 目标端口 | sport 源端口 |
|---|---|---|---|
PARAM |
tcp |
3689 |
|
PARAM |
udp |
3689 |
|
DCC
|
Distributed Checksum Clearinghouse spam filtering mechanism
|
| Action 操作 | proto 协议 | dport 目标端口 | sport 源端口 |
|---|---|---|---|
PARAM |
tcp |
6277 |
|
DHCPfwd DHCP 转发
|
Forwarded DHCP traffic 转发的 DHCP 流量 |
| Action 操作 | proto 协议 | dport 目标端口 | sport 源端口 |
|---|---|---|---|
PARAM |
udp |
67:68 |
67:68 |
|
DHCPv6
|
DHCPv6 traffic DHCPv6 流量 |
| Action 操作 | proto 协议 | dport 目标端口 | sport 运动 |
|---|---|---|---|
PARAM |
udp |
546:547 |
546:547 |
|
DNS
|
Domain Name System traffic (udp and tcp)
|
| Action 操作 | proto 协议 | dport 目标端口 | sport 源端口 |
|---|---|---|---|
PARAM |
udp 用户数据报协议 |
53 |
|
PARAM |
tcp |
53 |
|
Distcc
|
Distributed Compiler service
|
| Action 操作 | proto 协议 | dport 目标端口 | sport 源端口 |
|---|---|---|---|
PARAM |
tcp |
3632 |
|
FTP
|
File Transfer Protocol 文件传输协议 |
| Action 操作 | proto 协议 | dport 目标端口 | sport 运动 |
|---|---|---|---|
PARAM |
tcp |
21 |
|
Finger
|
Finger protocol (RFC 742)
|
| Action 动作 | proto 协议 | dport 目标端口 | sport 源端口 |
|---|---|---|---|
PARAM |
tcp |
79 |
|
GNUnet
|
GNUnet secure peer-to-peer networking traffic
|
| Action 操作 | proto 协议 | dport 目标端口 | sport 源端口 |
|---|---|---|---|
PARAM |
tcp |
2086 |
|
PARAM |
udp |
2086 |
|
PARAM |
tcp |
1080 |
|
PARAM |
udp |
1080 |
|
GRE
|
Generic Routing Encapsulation tunneling protocol
|
| Action 动作 | proto 协议 | dport 目标端口 | sport 源端口 |
|---|---|---|---|
PARAM |
47 |
|
Git
|
Git distributed revision control traffic
|
| Action 操作 | proto 协议 | dport 目标端口 | sport 源端口 |
|---|---|---|---|
PARAM |
tcp |
9418 |
|
HKP
|
OpenPGP HTTP key server protocol traffic
|
| Action 动作 | proto 协议 | dport 目标端口 | sport 源端口 |
|---|---|---|---|
PARAM |
tcp |
11371 |
|
HTTP
|
Hypertext Transfer Protocol (WWW)
|
| Action 动作 | proto 协议 | dport 目标端口 | sport 源端口 |
|---|---|---|---|
PARAM |
tcp |
80 |
|
HTTPS
|
Hypertext Transfer Protocol (WWW) over SSL
|
| Action 动作 | proto 协议 | dport 目标端口 | sport 源端口 |
|---|---|---|---|
PARAM |
tcp |
443 |
|
ICPV2
|
Internet Cache Protocol V2 (Squid) traffic
|
| Action 操作 | proto 协议 | dport 目标端口 | sport 源端口 |
|---|---|---|---|
PARAM |
udp |
3130 |
|
ICQ
|
AOL Instant Messenger traffic
|
| Action 动作 | proto 协议 | dport 目标端口 | sport 源端口 |
|---|---|---|---|
PARAM |
tcp |
5190 |
|
IMAP
|
Internet Message Access Protocol
|
| Action 操作 | proto 协议 | dport 目标端口 | sport 源端口 |
|---|---|---|---|
PARAM |
tcp |
143 |
|
IMAPS
|
Internet Message Access Protocol over SSL
|
| Action 动作 | proto 协议 | dport 目标端口 | sport 源端口 |
|---|---|---|---|
PARAM |
tcp |
993 |
|
IPIP
|
IPIP encapsulation traffic
|
| Action 操作 | proto 协议 | dport 目标端口 | sport 源端口 |
|---|---|---|---|
PARAM |
94 |
|
IPsec
|
IPsec traffic IPsec 流量 |
| Action 动作 | proto 协议 | dport 目标端口 | sport 源端口 |
|---|---|---|---|
PARAM |
udp |
500 |
500 |
PARAM |
50 |
|
IPsecah
|
IPsec authentication (AH) traffic
|
| Action 操作 | proto 协议 | dport 目标端口 | sport 源端口 |
|---|---|---|---|
PARAM |
udp 用户数据报协议 |
500 |
500 |
PARAM |
51 |
|
IPsecnat
|
IPsec traffic and Nat-Traversal
|
| Action 操作 | proto 协议 | dport 目标端口 | sport 源端口 |
|---|---|---|---|
PARAM |
udp |
500 |
|
PARAM |
udp |
4500 |
|
PARAM |
50 |
|
IRC
|
Internet Relay Chat traffic
|
| Action 动作 | proto 协议 | dport 目标端口 | sport 运动 |
|---|---|---|---|
PARAM |
tcp |
6667 |
|
Jetdirect
|
HP Jetdirect printing HP Jetdirect 打印 |
| Action 动作 | proto 协议 | dport 目标端口 | sport 源端口 |
|---|---|---|---|
PARAM |
tcp |
9100 |
|
L2TP
|
Layer 2 Tunneling Protocol traffic
|
| Action 操作 | proto 协议 | dport 目标端口 | sport 源端口 |
|---|---|---|---|
PARAM |
udp |
1701 |
|
LDAP
|
Lightweight Directory Access Protocol traffic
|
| Action 动作 | proto 协议 | dport 目标端口 | sport 源端口 |
|---|---|---|---|
PARAM |
tcp |
389 |
|
LDAPS
|
Secure Lightweight Directory Access Protocol traffic
|
| Action 操作 | proto 协议 | dport 目标端口 | sport 源端口 |
|---|---|---|---|
PARAM |
tcp |
636 |
|
MDNS
|
Multicast DNS 多播 DNS |
| Action 动作 | proto 协议 | dport 目标端口 | sport 源端口 |
|---|---|---|---|
PARAM |
udp |
5353 |
|
MSNP
|
Microsoft Notification Protocol
|
| Action 操作 | proto 协议 | dport 目标端口 | sport 源端口 |
|---|---|---|---|
PARAM |
tcp 传输控制协议 |
1863 |
|
MSSQL
|
Microsoft SQL Server |
| Action 动作 | proto 协议 | dport 目标端口 | sport 源端口 |
|---|---|---|---|
PARAM |
tcp |
1433 |
|
Mail 邮件
|
Mail traffic (SMTP, SMTPS, Submission)
|
| Action 操作 | proto 协议 | dport 目标端口 | sport 源端口 |
|---|---|---|---|
PARAM |
tcp |
25 |
|
PARAM |
tcp |
465 |
|
PARAM |
tcp |
587 |
|
Munin
|
Munin networked resource monitoring traffic
|
| Action 动作 | proto 协议 | dport 目标端口 | sport 源端口 |
|---|---|---|---|
PARAM |
tcp |
4949 |
|
MySQL
|
MySQL server MySQL 服务器 |
| Action 操作 | proto 协议 | dport 目标端口 | sport 源端口 |
|---|---|---|---|
PARAM |
tcp |
3306 |
|
NNTP
|
NNTP traffic (Usenet). NNTP 流量(Usenet)。 |
| Action 操作 | proto 协议 | dport 目标端口 | sport 运动 |
|---|---|---|---|
PARAM |
tcp |
119 |
|
NNTPS
|
Encrypted NNTP traffic (Usenet)
|
| Action 操作 | proto 协议 | dport 目标端口 | sport 源端口 |
|---|---|---|---|
PARAM |
tcp |
563 |
|
NTP
|
Network Time Protocol (ntpd)
|
| Action 操作 | proto 协议 | dport 目标端口 | sport 运动 |
|---|---|---|---|
PARAM |
udp |
123 |
|
NeighborDiscovery 邻居发现
|
IPv6 neighbor solicitation, neighbor and router advertisement
|
| Action 动作 | proto 协议 | dport 目标端口 | sport 源端口 |
|---|---|---|---|
PARAM |
icmpv6 |
router-solicitation 路由器请求 |
|
PARAM |
icmpv6 |
router-advertisement 路由器通告 |
|
PARAM |
icmpv6 |
neighbor-solicitation 邻居请求 |
|
PARAM |
icmpv6 |
neighbor-advertisement 邻居通告 |
|
OSPF
|
OSPF multicast traffic OSPF 多播流量 |
| Action 操作 | proto 协议 | dport 目标端口 | sport 运动 |
|---|---|---|---|
PARAM |
89 |
|
OpenVPN
|
OpenVPN traffic OpenVPN 流量 |
| Action 操作 | proto 协议 | dport 目标端口 | sport 源端口 |
|---|---|---|---|
PARAM |
udp 用户数据报协议 |
1194 |
|
PCA
|
Symantec pcAnywhere (tm) |
| Action 动作 | proto 协议 | dport 目标端口 | sport 运动 |
|---|---|---|---|
PARAM |
udp |
5632 |
|
PARAM |
tcp |
5631 |
|
PMG
|
Proxmox Mail Gateway web interface
|
| Action 动作 | proto 协议 | dport 目标端口 | sport 源端口 |
|---|---|---|---|
PARAM |
tcp |
8006 |
|
POP3
|
POP3 traffic POP3 流量 |
| Action 操作 | proto 协议 | dport 目标端口 | sport 源端口 |
|---|---|---|---|
PARAM |
tcp |
110 |
|
POP3S
|
Encrypted POP3 traffic 加密的 POP3 流量 |
| Action 动作 | proto 协议 | dport 目标端口 | sport 源端口 |
|---|---|---|---|
PARAM |
tcp |
995 |
|
PPtP
|
Point-to-Point Tunneling Protocol
|
| Action 操作 | proto 协议 | dport 目标端口 | sport 源端口 |
|---|---|---|---|
PARAM |
47 |
||
PARAM |
tcp |
1723 |
|
Ping
|
ICMP echo request ICMP 回显请求 |
| Action 操作 | proto 协议 | dport 目标端口 | sport 源端口 |
|---|---|---|---|
PARAM |
icmp |
echo-request 回显请求 |
|
PostgreSQL
|
PostgreSQL server PostgreSQL 服务器

| Action 操作 | proto 协议 | dport 目标端口 | sport 源端口 |
|---|---|---|---|
| PARAM | tcp | 5432 | |

Printer 打印机

Line Printer protocol printing

| Action 操作 | proto 协议 | dport 目标端口 | sport 源端口 |
|---|---|---|---|
| PARAM | tcp | 515 | |

RDP

Microsoft Remote Desktop Protocol traffic

| Action 操作 | proto 协议 | dport 目标端口 | sport 源端口 |
|---|---|---|---|
| PARAM | tcp | 3389 | |

RIP

Routing Information Protocol (bidirectional)

| Action 操作 | proto 协议 | dport 目标端口 | sport 源端口 |
|---|---|---|---|
| PARAM | udp | 520 | |

RNDC

BIND remote management protocol

| Action 操作 | proto 协议 | dport 目标端口 | sport 源端口 |
|---|---|---|---|
| PARAM | tcp | 953 | |

Razor

Razor Antispam System Razor 反垃圾邮件系统

| Action 操作 | proto 协议 | dport 目标端口 | sport 源端口 |
|---|---|---|---|
| PARAM | tcp | 2703 | |

Rdate

Remote time retrieval (rdate)

| Action 操作 | proto 协议 | dport 目标端口 | sport 源端口 |
|---|---|---|---|
| PARAM | tcp | 37 | |

Rsync

Rsync server Rsync 服务器

| Action 操作 | proto 协议 | dport 目标端口 | sport 源端口 |
|---|---|---|---|
| PARAM | tcp | 873 | |

SANE

SANE network scanning SANE 网络扫描

| Action 操作 | proto 协议 | dport 目标端口 | sport 源端口 |
|---|---|---|---|
| PARAM | tcp | 6566 | |

SMB

Microsoft SMB traffic Microsoft SMB 流量

| Action 操作 | proto 协议 | dport 目标端口 | sport 源端口 |
|---|---|---|---|
| PARAM | udp | 135,445 | |
| PARAM | udp | 137:139 | |
| PARAM | udp | 1024:65535 | 137 |
| PARAM | tcp | 135,139,445 | |

SMBswat

Samba Web Administration Tool

| Action 操作 | proto 协议 | dport 目标端口 | sport 源端口 |
|---|---|---|---|
| PARAM | tcp | 901 | |

SMTP

Simple Mail Transfer Protocol

| Action 操作 | proto 协议 | dport 目标端口 | sport 源端口 |
|---|---|---|---|
| PARAM | tcp | 25 | |

SMTPS

Encrypted Simple Mail Transfer Protocol

| Action 操作 | proto 协议 | dport 目标端口 | sport 源端口 |
|---|---|---|---|
| PARAM | tcp | 465 | |

SNMP

Simple Network Management Protocol

| Action 操作 | proto 协议 | dport 目标端口 | sport 源端口 |
|---|---|---|---|
| PARAM | udp | 161:162 | |
| PARAM | tcp | 161 | |

SPAMD

Spam Assassin SPAMD traffic

| Action 操作 | proto 协议 | dport 目标端口 | sport 源端口 |
|---|---|---|---|
| PARAM | tcp | 783 | |

SPICEproxy SPICE 代理

Proxmox VE SPICE display proxy traffic

| Action 操作 | proto 协议 | dport 目标端口 | sport 源端口 |
|---|---|---|---|
| PARAM | tcp | 3128 | |

SSH

Secure shell traffic 安全 Shell 流量

| Action 操作 | proto 协议 | dport 目标端口 | sport 源端口 |
|---|---|---|---|
| PARAM | tcp | 22 | |

SVN

Subversion server (svnserve)

| Action 操作 | proto 协议 | dport 目标端口 | sport 源端口 |
|---|---|---|---|
| PARAM | tcp | 3690 | |

SixXS

SixXS IPv6 Deployment and Tunnel Broker

| Action 操作 | proto 协议 | dport 目标端口 | sport 源端口 |
|---|---|---|---|
| PARAM | tcp | 3874 | |
| PARAM | udp | 3740 | |
| PARAM | 41 | | |
| PARAM | udp | 5072,8374 | |

Squid

Squid web proxy traffic

| Action 操作 | proto 协议 | dport 目标端口 | sport 源端口 |
|---|---|---|---|
| PARAM | tcp | 3128 | |

Submission 提交

Mail message submission traffic

| Action 操作 | proto 协议 | dport 目标端口 | sport 源端口 |
|---|---|---|---|
| PARAM | tcp | 587 | |

Syslog

Syslog protocol (RFC 5424) traffic

| Action 操作 | proto 协议 | dport 目标端口 | sport 源端口 |
|---|---|---|---|
| PARAM | udp | 514 | |
| PARAM | tcp | 514 | |

TFTP

Trivial File Transfer Protocol traffic

| Action 操作 | proto 协议 | dport 目标端口 | sport 源端口 |
|---|---|---|---|
| PARAM | udp | 69 | |

Telnet

Telnet traffic Telnet 流量

| Action 操作 | proto 协议 | dport 目标端口 | sport 源端口 |
|---|---|---|---|
| PARAM | tcp | 23 | |

Telnets

Telnet over SSL 通过 SSL 的 Telnet

| Action 操作 | proto 协议 | dport 目标端口 | sport 源端口 |
|---|---|---|---|
| PARAM | tcp | 992 | |

Time 时间

RFC 868 Time protocol

| Action 操作 | proto 协议 | dport 目标端口 | sport 源端口 |
|---|---|---|---|
| PARAM | tcp | 37 | |

Trcrt

Traceroute (for up to 30 hops) traffic

| Action 操作 | proto 协议 | dport 目标端口 | sport 源端口 |
|---|---|---|---|
| PARAM | udp | 33434:33524 | |
| PARAM | icmp | echo-request | |

VNC

VNC traffic for VNC displays 0 - 99

| Action 操作 | proto 协议 | dport 目标端口 | sport 源端口 |
|---|---|---|---|
| PARAM | tcp | 5900:5999 | |

VNCL

VNC traffic from Vncservers to Vncviewers in listen mode

| Action 操作 | proto 协议 | dport 目标端口 | sport 源端口 |
|---|---|---|---|
| PARAM | tcp | 5500 | |

Web 网络

WWW traffic (HTTP and HTTPS)

| Action 操作 | proto 协议 | dport 目标端口 | sport 源端口 |
|---|---|---|---|
| PARAM | tcp | 80 | |
| PARAM | tcp | 443 | |

Webcache 网页缓存

Web Cache/Proxy traffic (port 8080)

| Action 操作 | proto 协议 | dport 目标端口 | sport 源端口 |
|---|---|---|---|
| PARAM | tcp | 8080 | |

Webmin

Webmin traffic Webmin 流量

| Action 操作 | proto 协议 | dport 目标端口 | sport 源端口 |
|---|---|---|---|
| PARAM | tcp | 10000 | |

Whois

Whois (nicname, RFC 3912) traffic

| Action 操作 | proto 协议 | dport 目标端口 | sport 源端口 |
|---|---|---|---|
| PARAM | tcp | 43 | |
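These macros are referenced by name in firewall rules instead of spelling out the protocol and port columns by hand. As a minimal, hypothetical sketch (the VMID 100, the interface name net0, and the source subnet are assumptions chosen only for illustration), a guest firewall configuration using the SSH and Web macros could look like this:

```
# /etc/pve/firewall/100.fw (hypothetical guest firewall configuration)
[OPTIONS]
enable: 1

[RULES]
# SSH macro: accept inbound tcp/22, here restricted to a management subnet
IN SSH(ACCEPT) -i net0 -source 192.168.2.0/24
# Web macro: accept inbound HTTP and HTTPS (tcp/80 and tcp/443)
IN Web(ACCEPT) -i net0
```

A macro expands to all of the proto/dport rows listed in its table above, so a single rule line covers every entry of that macro.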
28. Appendix G: Markdown Primer
28. 附录 G:Markdown 入门指南
Markdown is a text-to-HTML conversion tool for web writers. Markdown allows you
to write using an easy-to-read, easy-to-write plain text format, then convert
it to structurally valid XHTML (or HTML).
Markdown 是一种面向网页作者的文本转 HTML 转换工具。Markdown 允许你使用一种易读易写的纯文本格式编写内容,然后将其转换为结构有效的 XHTML(或 HTML)。
— John Gruber
— 约翰·格鲁伯
The Proxmox VE web interface supports using Markdown to render rich text
formatting in node and virtual guest notes.
Proxmox VE 的网页界面支持在节点和虚拟客户备注中使用 Markdown 来渲染富文本格式。
Proxmox VE supports CommonMark with most extensions of GFM (GitHub Flavoured Markdown),
like tables or task-lists.
Proxmox VE 支持 CommonMark 以及大多数 GFM(GitHub 风格的 Markdown)扩展,如表格或任务列表。
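For example, a GFM task list (shown here purely as an illustration of that extension) uses bracket markers in front of list items:

```
- [x] a finished item
- [ ] an open item
```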
28.1. Markdown Basics 28.1. Markdown 基础
Note that we only describe the basics here; please search the web for more
extensive resources, for example https://www.markdownguide.org/
请注意,我们这里只描述基础内容,更多详细资源请自行在网上搜索,例如访问 https://www.markdownguide.org/
28.1.1. Headings 28.1.1. 标题
# This is a Heading h1
## This is a Heading h2
##### This is a Heading h5
28.1.2. Emphasis 28.1.2. 强调
Use *text* or _text_ for emphasis.
使用 *text* 或 _text_ 来表示强调。
Use **text** or __text__ for bold, heavy-weight text.
使用 **text** 或 __text__ 来表示加粗、加重的文本。
Combinations are also possible, for example:
也可以组合使用,例如:
_You **can** combine them_
28.1.3. Links 28.1.3. 链接
You can use automatic detection of links; for example,
https://forum.proxmox.com/ would be transformed into a clickable link.
您可以使用自动检测链接,例如,https://forum.proxmox.com/ 会将其转换为可点击的链接。
You can also control the link text, for example:
您也可以控制链接文本,例如:
Now, [the part in brackets will be the link text](https://forum.proxmox.com/).
28.1.4. Lists 28.1.4. 列表
Unordered Lists 无序列表
Use * or - for unordered lists, for example:
使用 * 或 - 来表示无序列表,例如:
* Item 1
* Item 2
  * Item 2a
  * Item 2b
Adding an indentation can be used to create nested lists.
添加缩进可以用来创建嵌套列表。
28.1.5. Tables 28.1.5. 表格
Tables use the pipe symbol | to separate columns, and - to separate the
table header from the table body. In that separator row you can also set the text
alignment, making a column left-, center-, or right-aligned.
表格使用管道符号 | 来分隔列,使用 - 来分隔表头和表体,在该分隔线上还可以设置文本对齐方式,使某一列左对齐、居中或右对齐。
| Left columns | Right columns | Some   | More | Cols. | Centering Works Too |
| ------------ | ------------: | ------ | ---- | ----- | :-----------------: |
| left foo     | right foo     | First  | Row  | Here  | >center<            |
| left bar     | right bar     | Second | Row  | Here  | 12345               |
| left baz     | right baz     | Third  | Row  | Here  | Test                |
| left zab     | right zab     | Fourth | Row  | Here  | ☁️☁️☁️              |
| left rab     | right rab     | And    | Last | Here  | The End             |
Note that you do not need to align the columns nicely with white space, but that makes
editing tables easier.
请注意,您不需要用空格将列对齐整齐,但这样做会使编辑表格更容易。
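As a small illustration (the cell values are arbitrary), the same table syntax renders correctly even without padded columns:

```
| Name | Value |
|---|---:|
| memory | 2048 |
| cores | 4 |
```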
28.1.6. Block Quotes 28.1.6. 块引用
You can enter block quotes by prefixing a line with >, similar to plain-text emails.
您可以通过在行首添加 > 来输入块引用,类似于纯文本电子邮件中的用法。
> Markdown is a lightweight markup language with plain-text-formatting syntax,
> created in 2004 by John Gruber with Aaron Swartz.
>
>> Markdown is often used to format readme files, for writing messages in online discussion forums,
>> and to create rich text using a plain text editor.
28.1.7. Code and Snippets
28.1.7. 代码和代码片段
You can use backticks to avoid processing of a few words or paragraphs. That is useful
to prevent a code or configuration hunk from being mistakenly interpreted as Markdown.
你可以使用反引号来避免对几个单词或段落进行处理。这对于防止代码或配置块被错误地解释为 Markdown 非常有用。
Inline code 行内代码
Surrounding part of a line with single backticks allows you to write code inline,
for example:
用单个反引号包围一行中的部分内容,可以实现行内代码书写,例如:
This host's IP address is `10.0.0.1`.
Whole blocks of code
整段代码
For code blocks spanning several lines you can use triple-backticks to start
and end such a block, for example:
对于跨多行的代码块,您可以使用三重反引号来开始和结束该代码块,例如:
```
# This is the network config I want to remember here
auto vmbr2
iface vmbr2 inet static
address 10.0.0.1/24
bridge-ports ens20
bridge-stp off
bridge-fd 0
bridge-vlan-aware yes
bridge-vids 2-4094
```
29. Appendix H: GNU Free Documentation License
29. 附录 H:GNU 自由文档许可证
Version 1.3, 3 November 2008
版本 1.3,2008 年 11 月 3 日
Copyright (C) 2000, 2001, 2002, 2007, 2008 Free Software Foundation, Inc.
<http://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
The purpose of this License is to make a manual, textbook, or other
functional and useful document "free" in the sense of freedom: to
assure everyone the effective freedom to copy and redistribute it,
with or without modifying it, either commercially or noncommercially.
Secondarily, this License preserves for the author and publisher a way
to get credit for their work, while not being considered responsible
for modifications made by others.
本许可证的目的是使手册、教科书或其他功能性且有用的文档在自由的意义上“自由”:确保每个人都拥有有效的自由,可以复制和再分发该文档,无论是否修改,且无论是商业还是非商业用途。其次,本许可证为作者和出版者保留了一种获得其作品认可的方式,同时不对他人所做的修改承担责任。
This License is a kind of "copyleft", which means that derivative
works of the document must themselves be free in the same sense. It
complements the GNU General Public License, which is a copyleft
license designed for free software.
本许可证是一种“版权左转”(copyleft),意味着文档的衍生作品必须以相同的自由方式保持自由。它是 GNU 通用公共许可证的补充,后者是一种为自由软件设计的版权左转许可证。
We have designed this License in order to use it for manuals for free
software, because free software needs free documentation: a free
program should come with manuals providing the same freedoms that the
software does. But this License is not limited to software manuals;
it can be used for any textual work, regardless of subject matter or
whether it is published as a printed book. We recommend this License
principally for works whose purpose is instruction or reference.
我们设计本许可证是为了用于自由软件的手册,因为自由软件需要自由的文档:一个自由程序应附带提供与软件相同自由的手册。但本许可证不限于软件手册;它可用于任何文本作品,无论主题为何,或是否以印刷书籍形式出版。我们主要推荐本许可证用于以教学或参考为目的的作品。
1. APPLICABILITY AND DEFINITIONS 1. 适用范围与定义
This License applies to any manual or other work, in any medium, that
contains a notice placed by the copyright holder saying it can be
distributed under the terms of this License. Such a notice grants a
world-wide, royalty-free license, unlimited in duration, to use that
work under the conditions stated herein. The "Document", below,
refers to any such manual or work. Any member of the public is a
licensee, and is addressed as "you". You accept the license if you
copy, modify or distribute the work in a way requiring permission
under copyright law.
本许可证适用于任何包含版权持有人声明可根据本许可证条款分发的通知的手册或其他作品,无论其媒介为何。此类通知授予全球范围内、免版税、无限期的许可,允许在本文所述条件下使用该作品。下文中的“文档”指任何此类手册或作品。任何公众成员均为被许可人,称为“您”。如果您以版权法要求许可的方式复制、修改或分发该作品,即表示您接受本许可证。
A "Modified Version" of the Document means any work containing the
Document or a portion of it, either copied verbatim, or with
modifications and/or translated into another language.
“修改版本”的文档是指包含文档全部或部分内容的任何作品,无论是逐字复制,还是经过修改和/或翻译成另一种语言。
A "Secondary Section" is a named appendix or a front-matter section of
the Document that deals exclusively with the relationship of the
publishers or authors of the Document to the Document’s overall
subject (or to related matters) and contains nothing that could fall
directly within that overall subject. (Thus, if the Document is in
part a textbook of mathematics, a Secondary Section may not explain
any mathematics.) The relationship could be a matter of historical
connection with the subject or with related matters, or of legal,
commercial, philosophical, ethical or political position regarding
them.
“次要章节”是指文档中一个具名的附录或前言部分,专门处理文档的发布者或作者与文档整体主题(或相关事项)之间的关系,且不包含任何可能直接属于该整体主题的内容。(因此,如果文档部分内容是数学教科书,次要章节不得解释任何数学内容。)这种关系可以是与主题或相关事项的历史联系,也可以是关于它们的法律、商业、哲学、伦理或政治立场。
The "Invariant Sections" are certain Secondary Sections whose titles
are designated, as being those of Invariant Sections, in the notice
that says that the Document is released under this License. If a
section does not fit the above definition of Secondary then it is not
allowed to be designated as Invariant. The Document may contain zero
Invariant Sections. If the Document does not identify any Invariant
Sections then there are none.
“不变章节”是指某些次要章节,其标题在声明文档根据本许可证发布的通知中被指定为不变章节的标题。如果某章节不符合上述次要章节的定义,则不允许将其指定为不变章节。文档中可以没有不变章节。如果文档未标明任何不变章节,则表示没有不变章节。
The "Cover Texts" are certain short passages of text that are listed,
as Front-Cover Texts or Back-Cover Texts, in the notice that says that
the Document is released under this License. A Front-Cover Text may
be at most 5 words, and a Back-Cover Text may be at most 25 words.
“封面文字”是指在声明该文档根据本许可证发布的通知中,作为封面文字或封底文字列出的某些简短文字段落。封面文字最多可包含 5 个单词,封底文字最多可包含 25 个单词。
A "Transparent" copy of the Document means a machine-readable copy,
represented in a format whose specification is available to the
general public, that is suitable for revising the document
straightforwardly with generic text editors or (for images composed of
pixels) generic paint programs or (for drawings) some widely available
drawing editor, and that is suitable for input to text formatters or
for automatic translation to a variety of formats suitable for input
to text formatters. A copy made in an otherwise Transparent file
format whose markup, or absence of markup, has been arranged to thwart
or discourage subsequent modification by readers is not Transparent.
An image format is not Transparent if used for any substantial amount
of text. A copy that is not "Transparent" is called "Opaque".
“透明”副本指的是机器可读的副本,其格式规范对公众开放,适合使用通用文本编辑器直接修改文档,或(对于由像素组成的图像)使用通用绘图程序,或(对于图形)使用某些广泛可用的绘图编辑器,并且适合输入到文本格式化程序或自动转换为适合输入到文本格式化程序的多种格式。以其他透明文件格式制作的副本,如果其标记或缺少标记的方式被安排用来阻止或阻碍读者后续修改,则不属于透明副本。如果图像格式用于大量文本,则该图像格式不属于透明格式。非“透明”副本称为“不透明”副本。
Examples of suitable formats for Transparent copies include plain
ASCII without markup, Texinfo input format, LaTeX input format, SGML
or XML using a publicly available DTD, and standard-conforming simple
HTML, PostScript or PDF designed for human modification. Examples of
transparent image formats include PNG, XCF and JPG. Opaque formats
include proprietary formats that can be read and edited only by
proprietary word processors, SGML or XML for which the DTD and/or
processing tools are not generally available, and the
machine-generated HTML, PostScript or PDF produced by some word
processors for output purposes only.
适合透明副本的格式示例包括无标记的纯 ASCII、Texinfo 输入格式、LaTeX 输入格式、使用公开 DTD 的 SGML 或 XML,以及符合标准的简单 HTML、为人工修改设计的 PostScript 或 PDF。透明图像格式的示例包括 PNG、XCF 和 JPG。不透明格式包括只能由专有文字处理器读取和编辑的专有格式、DTD 和/或处理工具通常不可用的 SGML 或 XML,以及某些文字处理器仅为输出目的生成的机器生成的 HTML、PostScript 或 PDF。
The "Title Page" means, for a printed book, the title page itself,
plus such following pages as are needed to hold, legibly, the material
this License requires to appear in the title page. For works in
formats which do not have any title page as such, "Title Page" means
the text near the most prominent appearance of the work’s title,
preceding the beginning of the body of the text.
“标题页”指的是印刷书籍中的标题页本身,以及为清晰显示本许可证要求出现在标题页上的材料而需要的后续页面。对于没有正式标题页格式的作品,“标题页”指的是作品标题最显著出现位置附近的文本,位于正文开始之前。
The "publisher" means any person or entity that distributes copies of
the Document to the public.
“出版者”指任何向公众分发文档副本的个人或实体。
A section "Entitled XYZ" means a named subunit of the Document whose
title either is precisely XYZ or contains XYZ in parentheses following
text that translates XYZ in another language. (Here XYZ stands for a
specific section name mentioned below, such as "Acknowledgements",
"Dedications", "Endorsements", or "History".) To "Preserve the Title"
of such a section when you modify the Document means that it remains a
section "Entitled XYZ" according to this definition.
“标题为 XYZ”的章节是指文档中一个命名的子单元,其标题要么正是 XYZ,要么在标题中包含 XYZ,且 XYZ 位于括号内,括号前的文字是 XYZ 的另一种语言翻译。(这里的 XYZ 代表下面提到的特定章节名称,如“致谢”、“献词”、“认可”或“历史”。)当你修改文档时,“保留该章节标题”意味着该章节仍然是根据此定义的“标题为 XYZ”的章节。
The Document may include Warranty Disclaimers next to the notice which
states that this License applies to the Document. These Warranty
Disclaimers are considered to be included by reference in this
License, but only as regards disclaiming warranties: any other
implication that these Warranty Disclaimers may have is void and has
no effect on the meaning of this License.
文档中可能包含在声明本许可证适用于该文档的通知旁边的免责声明。这些免责声明被视为通过引用包含在本许可证中,但仅限于免责声明的范围:这些免责声明可能产生的任何其他含义均无效,对本许可证的含义没有影响。
You may copy and distribute the Document in any medium, either
commercially or noncommercially, provided that this License, the
copyright notices, and the license notice saying this License applies
to the Document are reproduced in all copies, and that you add no
other conditions whatsoever to those of this License. You may not use
technical measures to obstruct or control the reading or further
copying of the copies you make or distribute. However, you may accept
compensation in exchange for copies. If you distribute a large enough
number of copies you must also follow the conditions in section 3.
您可以以任何媒介复制和分发本文件,无论是商业还是非商业用途,前提是本许可证、版权声明以及声明本许可证适用于本文件的许可声明必须在所有副本中被复制,并且您不得对本许可证的条件添加任何其他条件。您不得使用技术手段阻碍或控制您制作或分发的副本的阅读或进一步复制。然而,您可以因副本收取报酬。如果您分发的副本数量足够多,您还必须遵守第 3 节中的条件。
You may also lend copies, under the same conditions stated above, and
you may publicly display copies.
您也可以在上述相同条件下借出副本,并且可以公开展示副本。
If you publish printed copies (or copies in media that commonly have
printed covers) of the Document, numbering more than 100, and the
Document’s license notice requires Cover Texts, you must enclose the
copies in covers that carry, clearly and legibly, all these Cover
Texts: Front-Cover Texts on the front cover, and Back-Cover Texts on
the back cover. Both covers must also clearly and legibly identify
you as the publisher of these copies. The front cover must present
the full title with all words of the title equally prominent and
visible. You may add other material on the covers in addition.
Copying with changes limited to the covers, as long as they preserve
the title of the Document and satisfy these conditions, can be treated
as verbatim copying in other respects.
如果您发布的文档印刷副本(或通常带有印刷封面的媒介副本)数量超过 100 份,并且文档的许可声明要求封面文字,您必须将副本装订在封面上,封面上清晰且易读地包含所有这些封面文字:前封面文字应在封面正面,后封面文字应在封面背面。两个封面还必须清晰且易读地标明您是这些副本的出版者。封面正面必须完整呈现标题,标题中的所有词语应同等突出且可见。您可以在封面上添加其他材料。仅对封面进行更改的复制,只要保留文档标题并满足这些条件,在其他方面可视为逐字复制。
If the required texts for either cover are too voluminous to fit
legibly, you should put the first ones listed (as many as fit
reasonably) on the actual cover, and continue the rest onto adjacent
pages.
如果任一封面所需的文字过多,无法清晰容纳,您应将列表中最先列出的文字(合理范围内尽可能多)放在实际封面上,其余文字则延续至相邻页面。
If you publish or distribute Opaque copies of the Document numbering
more than 100, you must either include a machine-readable Transparent
copy along with each Opaque copy, or state in or with each Opaque copy
a computer-network location from which the general network-using
public has access to download using public-standard network protocols
a complete Transparent copy of the Document, free of added material.
If you use the latter option, you must take reasonably prudent steps,
when you begin distribution of Opaque copies in quantity, to ensure
that this Transparent copy will remain thus accessible at the stated
location until at least one year after the last time you distribute an
Opaque copy (directly or through your agents or retailers) of that
edition to the public.
如果您发布或分发超过 100 份不透明副本,您必须随每份不透明副本一并提供一份机器可读的透明副本,或者在每份不透明副本中或随附说明一个计算机网络位置,供公众通过公共标准网络协议访问并免费下载该文档的完整透明副本,且不得附加任何额外材料。如果您选择后者,您必须在开始大量分发不透明副本时采取合理谨慎的措施,确保该透明副本在所述位置保持可访问状态,至少持续到您最后一次向公众(直接或通过您的代理商或零售商)分发该版本不透明副本后一年。
It is requested, but not required, that you contact the authors of the
Document well before redistributing any large number of copies, to
give them a chance to provide you with an updated version of the
Document.
建议您在重新分发大量副本之前,提前联系文档作者,以便他们有机会向您提供文档的更新版本,但这并非强制要求。
You may copy and distribute a Modified Version of the Document under
the conditions of sections 2 and 3 above, provided that you release
the Modified Version under precisely this License, with the Modified
Version filling the role of the Document, thus licensing distribution
and modification of the Modified Version to whoever possesses a copy
of it. In addition, you must do these things in the Modified Version:
您可以在上述第 2 和第 3 节的条件下复制和分发文档的修改版本,前提是您必须以本许可证的形式发布该修改版本,使修改版本充当文档的角色,从而授权持有其副本的任何人分发和修改该修改版本。此外,您必须在修改版本中执行以下操作:
-
Use in the Title Page (and on the covers, if any) a title distinct from that of the Document, and from those of previous versions (which should, if there were any, be listed in the History section of the Document). You may use the same title as a previous version if the original publisher of that version gives permission.
在标题页(以及封面,如有)使用与文档标题不同的标题,并且与之前版本的标题不同(如果有,应在文档的历史部分列出)。如果之前版本的原始发布者允许,您可以使用与之前版本相同的标题。 -
List on the Title Page, as authors, one or more persons or entities responsible for authorship of the modifications in the Modified Version, together with at least five of the principal authors of the Document (all of its principal authors, if it has fewer than five), unless they release you from this requirement.
在标题页列出作为作者的一个或多个对修改版本的修改负有责任的人或实体,以及文档的至少五位主要作者(如果少于五位,则列出全部主要作者),除非他们免除您此项要求。 -
State on the Title page the name of the publisher of the Modified Version, as the publisher.
在标题页声明修改版本的发布者名称,作为发布者。 -
Preserve all the copyright notices of the Document.
保留文档中的所有版权声明。 -
Add an appropriate copyright notice for your modifications adjacent to the other copyright notices.
在其他版权声明旁边添加适当的版权声明,说明您的修改。 -
Include, immediately after the copyright notices, a license notice giving the public permission to use the Modified Version under the terms of this License, in the form shown in the Addendum below.
在版权声明之后立即包含一条许可声明,授予公众根据本许可条款使用修改版本的权限,格式如下面附录所示。 -
Preserve in that license notice the full lists of Invariant Sections and required Cover Texts given in the Document’s license notice.
在该许可声明中保留文档许可声明中给出的不变部分和必需封面文本的完整列表。 -
Include an unaltered copy of this License.
包含本许可证的未修改副本。 -
Preserve the section Entitled "History", Preserve its Title, and add to it an item stating at least the title, year, new authors, and publisher of the Modified Version as given on the Title Page. If there is no section Entitled "History" in the Document, create one stating the title, year, authors, and publisher of the Document as given on its Title Page, then add an item describing the Modified Version as stated in the previous sentence.
保留标题为“历史”的章节,保留其标题,并在其中添加一项,至少说明修改版本的标题、年份、新作者和出版者,如封面页所示。如果文档中没有标题为“历史”的章节,则创建一个,说明文档封面页所示的标题、年份、作者和出版者,然后添加一项,描述修改版本,如前述句子所述。 -
Preserve the network location, if any, given in the Document for public access to a Transparent copy of the Document, and likewise the network locations given in the Document for previous versions it was based on. These may be placed in the "History" section. You may omit a network location for a work that was published at least four years before the Document itself, or if the original publisher of the version it refers to gives permission.
保留文档中为公开访问文档的透明副本所提供的网络位置(如果有),以及文档中为其所基于的先前版本所提供的网络位置。这些可以放在“历史”章节中。对于发布于文档本身至少四年前的作品,或者如果其所指版本的原始出版者给予许可,可以省略网络位置。 -
For any section Entitled "Acknowledgements" or "Dedications", Preserve the Title of the section, and preserve in the section all the substance and tone of each of the contributor acknowledgements and/or dedications given therein.
对于任何标题为“致谢”或“献词”的章节,保留章节标题,并保留该章节中所有贡献者致谢和/或献词的内容和语气。 -
Preserve all the Invariant Sections of the Document, unaltered in their text and in their titles. Section numbers or the equivalent are not considered part of the section titles.
保留文档中所有不变节的内容,文本和标题均不得更改。不变节的章节编号或等同内容不视为章节标题的一部分。 -
Delete any section Entitled "Endorsements". Such a section may not be included in the Modified Version.
删除任何标题为“认可”的章节。修改版本中不得包含此类章节。 -
Do not retitle any existing section to be Entitled "Endorsements" or to conflict in title with any Invariant Section.
不得将任何现有章节重新命名为“认可”,也不得将其标题与任何不变节的标题冲突。 -
Preserve any Warranty Disclaimers.
保留所有免责声明。
If the Modified Version includes new front-matter sections or
appendices that qualify as Secondary Sections and contain no material
copied from the Document, you may at your option designate some or all
of these sections as invariant. To do this, add their titles to the
list of Invariant Sections in the Modified Version’s license notice.
These titles must be distinct from any other section titles.
如果修改版本包含新的前言部分或附录,这些部分符合次要章节的定义且不包含从文档中复制的内容,您可以选择将其中部分或全部章节指定为不变章节。为此,请将这些章节标题添加到修改版本许可声明中的不变章节列表中。这些标题必须与其他章节标题不同。
You may add a section Entitled "Endorsements", provided it contains
nothing but endorsements of your Modified Version by various
parties—for example, statements of peer review or that the text has
been approved by an organization as the authoritative definition of a
standard.
您可以添加一个名为“认可”的章节,前提是该章节仅包含对您的修改版本的认可内容——例如,同行评审声明或某组织已批准该文本作为标准权威定义的声明。
You may add a passage of up to five words as a Front-Cover Text, and a
passage of up to 25 words as a Back-Cover Text, to the end of the list
of Cover Texts in the Modified Version. Only one passage of
Front-Cover Text and one of Back-Cover Text may be added by (or
through arrangements made by) any one entity. If the Document already
includes a cover text for the same cover, previously added by you or
by arrangement made by the same entity you are acting on behalf of,
you may not add another; but you may replace the old one, on explicit
permission from the previous publisher that added the old one.
您可以在修改版本的封面文本列表末尾添加一段最多五个单词的封面正面文本,以及一段最多二十五个单词的封面背面文本。任何一个实体(或通过该实体安排)只能添加一段封面正面文本和一段封面背面文本。如果文档已经包含由您或您所代表的同一实体通过安排之前添加的相同封面的封面文本,则您不得再添加另一段;但您可以在获得之前添加该文本的出版者明确许可的情况下替换旧文本。
The author(s) and publisher(s) of the Document do not by this License
give permission to use their names for publicity for or to assert or
imply endorsement of any Modified Version.
文档的作者和出版者并未通过本许可证授权使用他们的名字进行宣传,或声明或暗示对任何修改版本的认可。
You may combine the Document with other documents released under this
License, under the terms defined in section 4 above for modified
versions, provided that you include in the combination all of the
Invariant Sections of all of the original documents, unmodified, and
list them all as Invariant Sections of your combined work in its
license notice, and that you preserve all their Warranty Disclaimers.
您可以根据上述第 4 节中定义的修改版本条款,将本文件与根据本许可证发布的其他文件合并,前提是您在合并中包含所有原始文件的所有不变部分,且不对其进行修改,并在合并作品的许可证声明中将它们全部列为不变部分,同时保留它们所有的免责声明。
The combined work need only contain one copy of this License, and
multiple identical Invariant Sections may be replaced with a single
copy. If there are multiple Invariant Sections with the same name but
different contents, make the title of each such section unique by
adding at the end of it, in parentheses, the name of the original
author or publisher of that section if known, or else a unique number.
Make the same adjustment to the section titles in the list of
Invariant Sections in the license notice of the combined work.
合并作品只需包含本许可证的一份副本,多个相同的不变部分可以合并为一份副本。如果存在多个名称相同但内容不同的不变部分,请通过在标题末尾加上括号内的该部分原作者或发布者的名称(如果已知),或加上唯一编号,使每个此类部分的标题唯一。在合并作品的许可证声明中不变部分列表中也应做同样的标题调整。
In the combination, you must combine any sections Entitled "History"
in the various original documents, forming one section Entitled
"History"; likewise combine any sections Entitled "Acknowledgements",
and any sections Entitled "Dedications". You must delete all sections
Entitled "Endorsements".
在合并时,必须将各个原始文档中标题为“历史”的所有章节合并为一个标题为“历史”的章节;同样,将所有标题为“致谢”的章节合并,以及所有标题为“献词”的章节合并。必须删除所有标题为“认可”的章节。
6. COLLECTIONS OF DOCUMENTS 6. 文档集合
You may make a collection consisting of the Document and other
documents released under this License, and replace the individual
copies of this License in the various documents with a single copy
that is included in the collection, provided that you follow the rules
of this License for verbatim copying of each of the documents in all
other respects.
您可以制作一个由本文件和其他根据本许可证发布的文档组成的集合,并用包含在集合中的单一许可证副本替换各文档中单独的许可证副本,前提是您在其他所有方面遵守本许可证关于逐字复制各文档的规则。
You may extract a single document from such a collection, and
distribute it individually under this License, provided you insert a
copy of this License into the extracted document, and follow this
License in all other respects regarding verbatim copying of that
document.
您可以从此类集合中提取单个文档,并根据本许可证单独分发,前提是您在提取的文档中插入本许可证的副本,并在其他所有方面遵守本许可证关于逐字复制该文档的规定。
7. AGGREGATION WITH INDEPENDENT WORKS 7. 与独立作品的汇编
A compilation of the Document or its derivatives with other separate
and independent documents or works, in or on a volume of a storage or
distribution medium, is called an "aggregate" if the copyright
resulting from the compilation is not used to limit the legal rights
of the compilation’s users beyond what the individual works permit.
When the Document is included in an aggregate, this License does not
apply to the other works in the aggregate which are not themselves
derivative works of the Document.
将本文件或其衍生作品与其他独立且分离的文件或作品汇编在存储或分发介质的一个卷中,称为“汇编”,前提是该汇编所产生的版权不会限制汇编用户的合法权利,超出各个作品本身所允许的范围。当本文件被包含在汇编中时,本许可证不适用于汇编中那些本身不是本文件衍生作品的其他作品。
If the Cover Text requirement of section 3 is applicable to these
copies of the Document, then if the Document is less than one half of
the entire aggregate, the Document’s Cover Texts may be placed on
covers that bracket the Document within the aggregate, or the
electronic equivalent of covers if the Document is in electronic form.
Otherwise they must appear on printed covers that bracket the whole
aggregate.
如果第 3 节的封面文字要求适用于这些本文件的副本,那么如果本文件在整个汇编中所占比例不足一半,本文件的封面文字可以放置在汇编中本文件部分的封面上,或者如果本文件为电子形式,则放置在电子封面上。否则,封面文字必须出现在覆盖整个汇编的印刷封面上。
Translation is considered a kind of modification, so you may
distribute translations of the Document under the terms of section 4.
Replacing Invariant Sections with translations requires special
permission from their copyright holders, but you may include
translations of some or all Invariant Sections in addition to the
original versions of these Invariant Sections. You may include a
translation of this License, and all the license notices in the
Document, and any Warranty Disclaimers, provided that you also include
the original English version of this License and the original versions
of those notices and disclaimers. In case of a disagreement between
the translation and the original version of this License or a notice
or disclaimer, the original version will prevail.
翻译被视为一种修改,因此您可以根据第 4 节的条款分发文档的翻译版本。用翻译替换不变部分需要版权持有人的特别许可,但您可以在包含这些不变部分原始版本的基础上,附加部分或全部不变部分的翻译。您可以包含本许可证的翻译版本,以及文档中的所有许可证通知和任何免责声明,前提是您同时包含本许可证的英文原版以及这些通知和免责声明的原始版本。如果翻译版本与本许可证或某个通知或免责声明的原始版本之间存在分歧,以原始版本为准。
If a section in the Document is Entitled "Acknowledgements",
"Dedications", or "History", the requirement (section 4) to Preserve
its Title (section 1) will typically require changing the actual
title.
如果文档中的某一章节标题为“致谢”、“献词”或“历史”,则保留其标题的要求(第 4 节)通常需要更改实际标题。
You may not copy, modify, sublicense, or distribute the Document
except as expressly provided under this License. Any attempt
otherwise to copy, modify, sublicense, or distribute it is void, and
will automatically terminate your rights under this License.
除非本许可明确允许,否则您不得复制、修改、再授权或分发本文件。任何试图复制、修改、再授权或分发的行为均无效,并将自动终止您根据本许可享有的权利。
However, if you cease all violation of this License, then your license
from a particular copyright holder is reinstated (a) provisionally,
unless and until the copyright holder explicitly and finally
terminates your license, and (b) permanently, if the copyright holder
fails to notify you of the violation by some reasonable means prior to
60 days after the cessation.
但是,如果您停止了所有违反本许可的行为,则您从特定版权持有者处获得的许可将被恢复:(a) 临时恢复,除非且直到版权持有者明确且最终终止您的许可;(b) 永久恢复,如果版权持有者未能在停止行为后 60 天内通过合理方式通知您该违规行为。
Moreover, your license from a particular copyright holder is
reinstated permanently if the copyright holder notifies you of the
violation by some reasonable means, this is the first time you have
received notice of violation of this License (for any work) from that
copyright holder, and you cure the violation prior to 30 days after
your receipt of the notice.
此外,如果版权持有者通过合理方式通知您违规行为,且这是您首次收到该版权持有者关于本许可(任何作品)违规的通知,并且您在收到通知后 30 天内纠正了违规行为,则您从该版权持有者处获得的许可将被永久恢复。
Termination of your rights under this section does not terminate the
licenses of parties who have received copies or rights from you under
this License. If your rights have been terminated and not permanently
reinstated, receipt of a copy of some or all of the same material does
not give you any rights to use it.
根据本条款终止您的权利并不终止根据本许可证从您处获得副本或权利的各方的许可。如果您的权利已被终止且未被永久恢复,收到部分或全部相同材料的副本并不赋予您任何使用权。
10. FUTURE REVISIONS OF THIS LICENSE 10. 本许可证的未来修订
The Free Software Foundation may publish new, revised versions of the
GNU Free Documentation License from time to time. Such new versions
will be similar in spirit to the present version, but may differ in
detail to address new problems or concerns. See
http://www.gnu.org/copyleft/.
自由软件基金会可能会不时发布 GNU 自由文档许可证的新修订版本。此类新版本将在精神上与当前版本相似,但可能在细节上有所不同,以解决新的问题或关注点。详见 http://www.gnu.org/copyleft/。
Each version of the License is given a distinguishing version number.
If the Document specifies that a particular numbered version of this
License "or any later version" applies to it, you have the option of
following the terms and conditions either of that specified version or
of any later version that has been published (not as a draft) by the
Free Software Foundation. If the Document does not specify a version
number of this License, you may choose any version ever published (not
as a draft) by the Free Software Foundation. If the Document
specifies that a proxy can decide which future versions of this
License can be used, that proxy’s public statement of acceptance of a
version permanently authorizes you to choose that version for the
Document.
每个许可证版本都有一个独特的版本号。如果文档指定了该许可证的某个特定编号版本“或任何更高版本”适用于该文档,则您可以选择遵守该指定版本或自由软件基金会已发布(非草案)的任何更高版本的条款和条件。如果文档未指定该许可证的版本号,您可以选择自由软件基金会曾经发布的任何版本(非草案)。如果文档指定代理人可以决定使用该许可证的哪些未来版本,则该代理人公开接受某个版本的声明将永久授权您为该文档选择该版本。
"Massive Multiauthor Collaboration Site" (or "MMC Site") means any
World Wide Web server that publishes copyrightable works and also
provides prominent facilities for anybody to edit those works. A
public wiki that anybody can edit is an example of such a server. A
"Massive Multiauthor Collaboration" (or "MMC") contained in the site
means any set of copyrightable works thus published on the MMC site.
“大规模多作者协作站点”(或“MMC 站点”)指任何发布可版权作品且为任何人编辑这些作品提供显著设施的万维网服务器。任何人都可以编辑的公共维基就是此类服务器的一个例子。包含在该站点中的“大规模多作者协作”(或“MMC”)指在 MMC 站点上发布的任何一组可版权作品。
"CC-BY-SA" means the Creative Commons Attribution-Share Alike 3.0
license published by Creative Commons Corporation, a not-for-profit
corporation with a principal place of business in San Francisco,
California, as well as future copyleft versions of that license
published by that same organization.
“CC-BY-SA”指由位于加利福尼亚州旧金山的非营利机构 Creative Commons Corporation 发布的知识共享署名-相同方式共享 3.0 许可证,以及该组织未来发布的该许可证的任何后续自由版权版本。
"Incorporate" means to publish or republish a Document, in whole or in
part, as part of another Document.
“合并”指将一个文档全部或部分作为另一个文档的一部分进行发布或重新发布。
An MMC is "eligible for relicensing" if it is licensed under this
License, and if all works that were first published under this License
somewhere other than this MMC, and subsequently incorporated in whole or
in part into the MMC, (1) had no cover texts or invariant sections, and
(2) were thus incorporated prior to November 1, 2008.
如果一个 MMC 是在本许可证下授权的,并且所有最初在本许可证下发布于该 MMC 以外的地方,随后全部或部分合并到该 MMC 中的作品,(1) 没有封面文本或不变部分,且 (2) 因此在 2008 年 11 月 1 日之前被合并,则该 MMC“有资格重新授权”。
The operator of an MMC Site may republish an MMC contained in the site
under CC-BY-SA on the same site at any time before August 1, 2009,
provided the MMC is eligible for relicensing.
MMC 站点的运营者可以在 2009 年 8 月 1 日之前的任何时间,在同一站点上以 CC-BY-SA 许可重新发布该站点中包含的 MMC,前提是该 MMC 有资格重新授权。