Installation procedure:
Go into the xen-4.0.0 directory
- make xen
- make tools
- make stubdom
- make install-xen
- make install-tools
- make install-stubdom
- make linux-2.6-xen-config ---> in this step Xen pulls the latest 2.6.18.8-xen kernel source down from the network
- Copy /boot/config-XXX to build-linux-2.6.18-xen_x86_32/.config
- make linux-2.6-xen-config ---> enable XEN support: Processor type and features -> Subarchitecture Type (Xen-compatible); Xen ---> Privileged Guest (domain 0). Save and exit.
- vi build-linux-2.6.18-xen_x86_32/.config and add the following lines (to enable Transcendent Memory; a quick grep check follows this list):
CONFIG_TMEM=y
CONFIG_PRECACHE=y
CONFIG_PRESWAP=y
- make linux-2.6-xen-build
- make linux-2.6-xen-install
- depmod 2.6.18.8-xen
- mkinitrd -v -f --with=aacraid --with=sd_mod --with=scsi_mod --builtin=ata_piix initrd-2.6.18.8-xen.img 2.6.18.8-xen <--- generate the initrd
- cp initrd-2.6.18.8-xen.img /boot/
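Since hand-editing .config is easy to get wrong, the grep check mentioned above confirms that the three Transcendent Memory options survived configuration and the rebuild (the path assumes the build directory used in the steps above):
# grep -E "CONFIG_(TMEM|PRECACHE|PRESWAP)" build-linux-2.6.18-xen_x86_32/.config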
configure grub:
default=0
timeout=10
splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
# ============================================== #
# This part is for XEN
title xen-dom0
kernel /xen.gz dom0_mem=2048M
module /vmlinuz-2.6.18.8-xen ro root=LABEL=/ console=tty0
module /initrd-2.6.18.8-xen.img
# ============================================== #
title Red Hat Enterprise Linux Server (2.6.18-53.el5)
root (hd0,0)
kernel /vmlinuz-2.6.18-53.el5 ro root=LABEL=/ rhgb quiet crashkernel=128M@16M
initrd /initrd-2.6.18-53.el5.img
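Before rebooting, it can save a failed boot to confirm that every file the xen-dom0 entry references is actually present under /boot. xen.gz is normally a symlink that make install-xen creates, but that is worth verifying on your system:
# ls -l /boot/xen.gz /boot/vmlinuz-2.6.18.8-xen /boot/initrd-2.6.18.8-xen.img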
- reboot and wait...
- Set up the xend service on your Linux box
# chkconfig --add xend
# chkconfig xend on
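To sanity-check the toolstack without waiting for the next reboot, xend can be started from its init script on RHEL, after which a healthy setup should at least list Domain-0:
# service xend start
# xm list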
====================================================
How to enable TMEM:
tmem requires a Xen boot parameter: tmem. Modify /boot/grub/grub.conf like this:
kernel /xen.gz dom0_mem=2048M
==>
kernel /xen.gz dom0_mem=2048M tmem
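After rebooting with the tmem option in place, one way to confirm the hypervisor actually parsed it is to scan Xen's boot log from dom0 with xm dmesg; the exact message text varies by version, so grep loosely:
# xm dmesg | grep -i tmem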
tmem also has a few other Xen parameters:
tmem_compress - enables memory compression
tmem_dedup - enables memory deduplication (only available from 4.0.1 onward). Both of these options can slow tmem operations down by roughly 10x. Anyway, it depends on how the OS treats tmem: if there are no frequent gets and puts, it should not be much of a problem. I'm leaving them off for now.
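For reference, enabling compression is just one more option on the same xen.gz line sketched earlier (tmem_dedup would be appended the same way on 4.0.1):
kernel /xen.gz dom0_mem=2048M tmem tmem_compress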
tmem_shared_auth - intended to isolate different VMs from each other; I haven't fully worked out how it operates yet. See this passage from Transcendent Memory Internals:
"Shared pools and authentication.
When tmem was first proposed to the linux kernel mailing list
(LKML), there was concern expressed about security of shared ephemeral
pools. The initial tmem implementation only
required a client to provide a 128-bit UUID to identify a shared pool, and the
linux-side tmem implementation obtained this UUID from the superblock of the
shared filesystem (in ocfs2). It was
pointed out on LKML that the UUID was essentially a security key and any
malicious domain that guessed it would have access to any data from the shared
filesystem that found its way into tmem.
Ocfs2 has only very limited security; it is assumed that anyone who can
access the filesystem bits on the shared disk can mount the filesystem and use
it. But in a virtualized data center,
higher isolation requirements may apply.
As a result, a Xen boot option -- "tmem_shared_auth" -- was
added. The option defaults to disabled,
but when it is enabled, management tools must explicitly authenticate (or may
explicitly deny) shared pool access to any client.
On Xen, this is done with the "xm
tmem-shared-auth" command.
"
tmem_lock - mainly used for debugging; see the passage from Transcendent Memory Internals below. Don't turn it on in normal use, because serializing all tmem operations will degrade performance:
"A good locking strategy is critical to concurrency, but also must be designed carefully to avoid deadlock and livelock problems. For debugging purposes, tmem supports a "big kernel lock" which disables concurrency altogether (enabled in Xen with "tmem_lock", but note that this functionality is rarely tested and likely has bit-rotted). Infrequent but invasive tmem hypercalls, such as pool creation or the control operations, are serialized on a single read-write lock, called tmem_rwlock, which must be held for writing. All other tmem operations must hold this lock for reading, so frequent operations such as put and get and flush can execute simultaneously as long as no invasive operations are occurring."
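If you ever do need the big lock while debugging, it is again just a boot option on the xen.gz line; per the quote above this path is rarely tested, so treat it strictly as a debugging sketch:
kernel /xen.gz dom0_mem=2048M tmem tmem_lock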