pci

How does Linux kernel discover PCI devices?

Submitted by 混江龙づ霸主 on 2020-12-04 11:07:32
Question: On the driver side, pci_register_driver() is called when a driver module is loaded, or at boot time if the module is built in. (Whenever a device or driver is added, the driver/device list is walked to find a match; I get that part.) But where and when are PCI devices discovered and registered with the bus? I imagine this is arch specific and would involve the BIOS on x86, for example: a BIOS routine probes the PCI devices and puts the results in a list somewhere in RAM before the kernel is loaded, and each list…
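
To make the discovery step concrete, here is a minimal user-space sketch of the classic x86 configuration-space scan over I/O ports 0xCF8/0xCFC (CONFIG_ADDRESS/CONFIG_DATA). It only illustrates the probing idea; the kernel itself reaches configuration space through arch-specific pci_ops (port I/O or an ACPI-described ECAM/MMCONFIG window) and enumerates with pci_scan_bus()-style helpers, so the brute-force loop, the root requirement, and running this from user space are assumptions made for the sketch.

    /* Illustration only, not kernel code: brute-force scan of legacy PCI
     * configuration space through I/O ports 0xCF8/0xCFC on x86.
     * Needs root (iopl) and hardware that exposes the legacy ports. */
    #include <stdio.h>
    #include <stdint.h>
    #include <sys/io.h>                 /* iopl(), outl(), inl() -- glibc, x86 */

    static uint32_t cfg_read32(int bus, int dev, int fn, int off)
    {
        uint32_t addr = (1u << 31) | (bus << 16) | (dev << 11) | (fn << 8) | (off & 0xFC);
        outl(addr, 0xCF8);              /* CONFIG_ADDRESS: select bus/device/function/register */
        return inl(0xCFC);              /* CONFIG_DATA: read the selected 32-bit dword */
    }

    int main(void)
    {
        if (iopl(3) < 0) { perror("iopl"); return 1; }
        for (int bus = 0; bus < 256; bus++)
            for (int dev = 0; dev < 32; dev++)
                for (int fn = 0; fn < 8; fn++) {
                    uint32_t id = cfg_read32(bus, dev, fn, 0x00);
                    if ((id & 0xFFFF) == 0xFFFF)    /* vendor 0xFFFF: no device answers */
                        continue;
                    printf("%02x:%02x.%x vendor=%04x device=%04x\n",
                           bus, dev, fn, id & 0xFFFF, id >> 16);
                }
        return 0;
    }

Firmware performs essentially this walk before the kernel starts, and the kernel repeats its own scan while building the struct pci_dev list that pci_register_driver() is later matched against.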

Creating a Windows XP installation CD with an integrated SATA driver

Submitted by 放肆的年华 on 2020-03-29 20:47:10
SATA hard drives have plenty of advantages, but installing an operating system onto one is a real hassle: you have to catch the right moment to press "F6" to load the SATA driver, and you also have to prepare a floppy disk carrying that driver. Worse still, if the new machine has no floppy drive at all, you are simply stuck… So let's build a Windows XP installation CD that already contains the SATA driver; you will find that loading the SATA hard-drive driver is actually quite simple. To free the Windows XP installation from loading the SATA driver off a floppy, many people have wished the motherboard's SATA driver could be integrated into the installation CD, and this article shows how to DIY exactly such a disc. Preparing the Windows XP installation directory and files: open Explorer and, on a hard-disk partition with enough free space (at least 700MB; if the generated CD image will also be kept on that partition, reserve about twice the capacity of one CD, roughly 1.4GB), create a folder such as "D:\XPSATA" to hold all of the Windows XP installation files. Put a good Windows XP installation CD into the drive, select all the files on the disc in Explorer, and copy them into "D:\XPSATA". Preparing the driver files: the driver CD bundled with the motherboard usually contains the SATA driver. Open the CD's directory and look for a folder whose name contains "SATA" or "RAID"; you can also get the driver from the motherboard manufacturer's website or other driver download sites (such as MyDrivers, http://www.mydrivers.com, etc.…

InfiniBand learning summary

Submitted by ぃ、小莉子 on 2020-03-24 04:00:25
1. What is InfiniBand: the InfiniBand architecture is a switched-fabric interconnect technology that supports many concurrent links, and it is the I/O standard for the new generation of server hardware platforms. Thanks to its high bandwidth, low latency, and high scalability, it is well suited to communication between servers (for example replication and distributed workloads), between servers and storage devices (for example SANs and direct-attached storage), and between servers and the network (LANs, WANs, and the Internet). 2. Why InfiniBand came about: as CPU performance advanced rapidly, the I/O subsystem became the bottleneck limiting server performance, so people began to re-examine the PCI bus architecture that had been in use for more than a decade. Although the PCI bus lifted data transfers from 8/16 bits to 32 bits and even today's 64 bits, some inherent weaknesses limited its further growth. The PCI bus has the following defects: (1) because it uses a shared, bus-based transfer model, no two transfers can take place on the PCI bus at the same time; while one PCI device holds the bus, all other devices must wait; (2) as the bus frequency rose from 33MHz to 66MHz and even 133MHz (PCI-X), crosstalk between signal lines became increasingly severe, and routing several buses on one motherboard became increasingly difficult; (3) because PCI devices use memory-mapped I/O addresses to connect to memory, hot-adding a PCI device is very difficult. The current practice is to carve out a 50MB to 100MB region for every PCI device, address space that the user cannot use…
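
As a side illustration (not part of the original article) of how this hardware shows up to software on Linux, the short hedged sketch below lists the local InfiniBand host channel adapters through the libibverbs verbs API; it assumes libibverbs is installed, an RDMA-capable adapter is present, and the program is linked with -libverbs.

    /* Hedged sketch: enumerate local InfiniBand HCAs with libibverbs. */
    #include <stdio.h>
    #include <infiniband/verbs.h>

    int main(void)
    {
        int num = 0;
        struct ibv_device **list = ibv_get_device_list(&num);   /* NULL-terminated device array */
        if (!list) { perror("ibv_get_device_list"); return 1; }

        for (int i = 0; i < num; i++)
            printf("HCA %d: %s\n", i, ibv_get_device_name(list[i]));

        ibv_free_device_list(list);
        return 0;
    }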

KVM virtualization: cloning KVM virtual machines

Submitted by 两盒软妹~` on 2020-03-20 22:11:30
Cloning a KVM virtual machine falls into two cases, and this article walks through both; the guest being cloned is OEL5.8 x64. (1) A direct clone on the KVM host itself. (2) A copy-based clone made by duplicating the configuration file and the disk file (suitable for static migration to another host). 1. Direct clone on the local host (1) Inspect the virtual machine's configuration file [root@node1 ~]# cat /etc/libvirt/qemu/oeltest01.xml <!-- WARNING: THIS IS AN AUTO-GENERATED FILE. CHANGES TO IT ARE LIKELY TO BE OVERWRITTEN AND LOST. Changes to this xml configuration should be made using: virsh edit oeltest01 or other application using the libvirt API. --> <domain type='kvm'> <name>oeltest01</name> <uuid>8f2bb4a7-c7ed-32aa-3676-9fb05923269d</uuid> <memory unit='KiB'>524288</memory> <currentMemory unit='KiB'>524288</currentMemory> <vcpu
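
The excerpt above inspects the domain definition by reading the auto-generated file under /etc/libvirt/qemu/. As a hedged side note, the same XML can be obtained programmatically; the sketch below uses the libvirt C API (roughly the equivalent of "virsh dumpxml oeltest01"). The URI qemu:///system, the need for a running libvirtd, and linking with -lvirt are assumptions for this illustration; only the domain name oeltest01 comes from the excerpt.

    /* Hedged sketch: fetch a KVM domain's XML definition via the libvirt C API,
     * roughly what "virsh dumpxml oeltest01" prints. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <libvirt/libvirt.h>

    int main(void)
    {
        virConnectPtr conn = virConnectOpen("qemu:///system");
        if (!conn) { fprintf(stderr, "cannot connect to libvirtd\n"); return 1; }

        virDomainPtr dom = virDomainLookupByName(conn, "oeltest01");
        if (!dom) { fprintf(stderr, "domain not found\n"); virConnectClose(conn); return 1; }

        char *xml = virDomainGetXMLDesc(dom, 0);   /* caller frees the returned string */
        if (xml) { puts(xml); free(xml); }

        virDomainFree(dom);
        virConnectClose(conn);
        return 0;
    }

Going through virsh or the libvirt API, as the WARNING comment recommends, keeps the definition consistent when the clone's name, UUID, and MAC address are changed later, rather than editing the auto-generated file by hand.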