MTU

Scapy fails to send ipv6 packets

Anonymous (unverified) posted on 2019-12-03 10:24:21
Question: Hello everyone, I am new here, so please be kind. I have been using Scapy lately in order to send and receive IPv6 packets to selected IPv6-enabled servers. The gist of the code is here:

    text = line[:-1]
    # destination = getIPv6Addr(line[:-1])
    destination = "2607:f1c0:1000:60e0:7992:97f7:61b2:2814"
    source = "2001:630:d0:f105:5cfe:e988:421a:a7b7"
    syn = IPv6(dst=destination, src=source) / TCP(sport=555, dport=80, flags="S")  # flag S is a SYN packet
    syn.show()
    syn_ack = sr1(syn, timeout=11)

When I execute the code, however, this is what I get: Begin emission:

set MTU in C programmatically

谁说胖子不能爱 posted on 2019-12-03 08:24:44
A client requested that the MTU limit should be 1492. Is there a way to do it in the source code (a C program)? Is there any other way to do it in general (ifconfig?)? Why would somebody need to set the MTU to a certain limit? What is the benefit? And most important: by changing the MTU, is there any risk of breaking the code? It's not about speed directly; by increasing the MTU you reduce overhead, which is data responsible for the proper delivery of the packet but not usable by the end user. This can increase speed, but only for heavy traffic; also, if you increase

How to specify which eth interface Django test server should listen on?

Anonymous (unverified) posted on 2019-12-03 01:57:01
Question: As the title says, in an environment with multiple Ethernet interfaces and multiple IPs, the default Django test server is not attached to the network that I can access from my PC. Is there any way to specify the interface the Django test server should use? -- Added -- The network configuration is here. I'm connecting to the machine via the 143.248.x.y address from my PC. (My PC is also in the 143.248.a.b network.) But I cannot find this address. Normal Apache works very well, as do other custom daemons running on other ports. The one who configured
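The excerpt cuts off before any answer, but the usual way to make `runserver` reachable from other hosts (an assumption on my part; the accepted answer is not shown here) is to pass it an explicit address to bind, since by default it listens only on 127.0.0.1:

```shell
# Bind the dev server to one interface's address (143.248.x.y is the elided address from the question)
python manage.py runserver 143.248.x.y:8000

# Or listen on all interfaces at once
python manage.py runserver 0.0.0.0:8000
```

Note that `runserver` binds an IP address rather than an interface name, so in a multi-homed setup you pick the interface by giving its address.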

Dissecting the k8s flannel network plugin's modes: vxlan, host-gw, directrouting

孤街醉人 posted on 2019-12-03 01:41:11
Cross-node traffic has to go through NAT, i.e., source address translation is required. Kubernetes network communication falls into four cases:

1) Container-to-container: multiple containers inside the same pod talk to each other over lo;
2) Pod-to-pod: pod IP <---> pod IP; pods communicate with each other without any address translation;
3) Pod-to-service: pod IP <----> cluster IP (i.e., service IP) <----> pod IP; this is implemented with iptables or IPVS. Note that IPVS cannot fully replace iptables, because IPVS only does load balancing and cannot do NAT;
4) Service to clients outside the cluster.

    [root@master pki]# kubectl get configmap -n kube-system
    NAME                                 DATA   AGE
    coredns                              1      22d
    extension-apiserver-authentication   6      22d
    kube-flannel-cfg                     2      22d
    kube-proxy                           2      22d
    kubeadm-config                       1      22d
    kubelet-config-1.11                  1      22d
    kubernetes-dashboard-settings        1      9h
    [root@master pki]# kubectl get configmap kube-proxy -o yaml -n

BLE different MTU for different implementations

Anonymous (unverified) posted on 2019-12-03 01:40:02
Question: I have tried different implementations of a BLE connection on Android: one with RxAndroidBle and another with the plain Android API. I used the RxAndroidBle example app for testing. I connect to the same peripheral with the same service and characteristic. But when I read or get notifications from it, in the case of RxAndroidBle I receive 512 bytes, and in the case of the Android API just 20. I try to request an MTU of 512, but onMtuChanged is never called and I still receive 20. Am I missing something?

Requesting MTU with Bluetooth Low Energy connection on Android 4.3-4.4 (API 18-20)

Anonymous (unverified) posted on 2019-12-03 01:23:02
Question: I have a Bluetooth Low Energy application which requires an MTU size above the default 23 bytes. Android introduced BluetoothGatt#requestMtu() in API 21; is there any way, including the use of private APIs, to accomplish this pre-API 21? Answer 1: If you have control over the peripheral device, you can issue an MTU request (ATT_OP_MTU_REQ, opcode 0x02) from the peripheral. Android is capable of larger GATT MTUs (update: 517 bytes is the maximum value) if requested by the peripheral, and will happily send an according ATT_OP_MTU

Cluster hangs/shows error while executing simple MPI program in C

Anonymous (unverified) posted on 2019-12-03 01:03:01
Question: I am trying to run a simple MPI program (multiple array addition); it runs perfectly on my PC but simply hangs or shows the following error on the cluster. I am using Open MPI and the following command to execute. Network config of the cluster (master & node1):

    MASTER
    eth0  Link encap:Ethernet  HWaddr 00:22:19:A4:52:74
          inet addr:10.1.1.1  Bcast:10.1.255.255  Mask:255.255.0.0
          inet6 addr: fe80::222:19ff:fea4:5274/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:16914 errors:0 dropped:0 overruns:0 frame:0
          TX packets:7183 errors:0

k8s in practice

Anonymous (unverified) posted on 2019-12-03 00:41:02
Kubernetes configuration on an internal network:

    [root@k8s_node1 ~]# cat /etc/kubernetes/kubelet
    KUBELET_ARGS="--cluster_dns=10.254.10.2 --pod_infra_container_image=gcr.io/google_containers/pause-amd64:3.0"

Note: set --pod_infra_container_image to the actual address of the pause image; generally, once the image has been pushed to the internal private registry, just point it at the internal address, e.g.:

    --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0
    systemctl restart kubelet

k8s cluster network configuration: flannel (overlay network). On host A:

1. A container addresses the target container by IP directly; by default the packet leaves through the container's internal eth0.
2. The packet travels over the veth pair to vethXXX.
3. vethXXX is plugged directly into the virtual switch docker0, so the packet goes out through the docker0 bridge.
4. The routing table is consulted: packets destined for external container IPs are forwarded to the flannel0 virtual NIC, a point-to-point virtual device, and from there handed to the flanneld daemon listening on the other end

Offload techniques in network virtualization: LSO/LRO, GSO/GRO, TSO/UFO, VXLAN

Anonymous (unverified) posted on 2019-12-03 00:40:02
Originally published 2014-02-14 16:42:11.

offload: More and more NICs now support offload features to improve network send/receive performance. Offloading moves some packet processing that the operating system would otherwise do (such as segmentation and reassembly) into the NIC hardware, cutting CPU usage while raising throughput. The techniques include LSO/LRO, GSO/GRO, TSO/UFO, and others.

LSO/LRO: Large Segment Offload and Large Receive Offload, covering the send and receive directions respectively.

First, LSO. The basic unit of data on a network is the discrete packet, and a packet has a size limit: the MTU (Maximum Transmission Unit), typically 1500 bytes on Ethernet (a full frame, including Ethernet header and FCS, is 1518 bytes). If we want to send a large amount of data, the OS protocol stack automatically splits it into packets no larger than the MTU. This splitting is fairly expensive (often a separate checksum must be computed for each piece), and doing it on the CPU can drive utilization up. So can these simple, repetitive operations be offloaded to the NIC? That is LSO: when a send exceeds the MTU limit (which happens all the time), the OS submits a single transmit request to the NIC, and the NIC automatically pulls the data over
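The splitting work that LSO moves into the NIC can be illustrated in a few lines (a toy sketch: real stacks segment at the TCP MSS, i.e., the MTU minus IP and TCP headers, and the 40-byte header figure assumes IPv4 with no options):

```python
def segment(payload: bytes, mtu: int = 1500, header: int = 40) -> list:
    """Split a payload into chunks of at most (mtu - header) bytes,
    the way the OS protocol stack (or, with LSO, the NIC) would."""
    mss = mtu - header  # 1460 for MTU 1500 with 20B IP + 20B TCP headers
    return [payload[i:i + mss] for i in range(0, len(payload), mss)]

chunks = segment(b"x" * 65536)        # one 64 KiB send...
print(len(chunks))                    # ...becomes 45 packets
print(len(chunks[0]), len(chunks[-1]))  # 1460 1296 (the last piece is the remainder)
```

With LSO, the checksum and header replication for all 45 of those packets happens on the NIC instead of the CPU; the OS hands over the 64 KiB buffer once.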

Notes on Nmap's IP fragmentation and sending

Anonymous (unverified) posted on 2019-12-03 00:25:02
Netutil.cc: analysis of the IP fragmentation and send functions.

1. The MTU is the maximum data size that can pass through a given layer of a communication protocol.
2. Fragmentation: the IP payload is divided into `fragment` pieces, each of size mtu; the last piece is datalen % mtu bytes, since the remainder may not fill a whole mtu. Each resulting piece has length fdatalen.
3. Fields to fill in the IP header: ip_off is the offset relative to the start of the payload, so the byte offset of piece number `fragment` is (fragment - 1) * mtu; and since ip_off is expressed in 8-byte units, ...

The send function: each IP fragment fpacket built above is copied into an Ethernet frame at offset 14 (just past the 14-byte Ethernet header):

    memcpy(eth_frame + 14, packet, packetlen);

and then sent out:

    /* Send an IP packet over an ethernet handle. */
    int send_ip_packet_eth(const struct eth_nfo *eth, const u8 *packet, unsigned int packetlen) {
      eth_t *ethsd;
      u8 *eth_frame;
      int res;

      eth_frame = (u8 *) safe_malloc(14 + packetlen);
      memcpy(eth_frame + 14, packet, packetlen);
      eth_pack_hdr(eth
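The offset arithmetic described in points 2 and 3 can be sketched as follows (a toy model, not Nmap's actual C code; the per-fragment data size must be a multiple of 8 precisely because ip_off is stored in 8-byte units):

```python
def fragments(datalen: int, mtu: int = 8):
    """Yield (ip_off in 8-byte units, fragment data length) for an IP payload
    of datalen bytes split into mtu-byte pieces."""
    assert mtu % 8 == 0, "per-fragment data size must be a multiple of 8"
    offset = 0
    while offset < datalen:
        fdatalen = min(mtu, datalen - offset)  # last piece is the datalen % mtu remainder
        yield offset // 8, fdatalen            # ip_off counts 8-byte blocks, not bytes
        offset += fdatalen

print(list(fragments(20, 8)))  # [(0, 8), (1, 8), (2, 4)]
```

A 20-byte payload with 8-byte fragments thus produces three fragments at byte offsets 0, 8, and 16, which the IP header encodes as ip_off values 0, 1, and 2.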