Persistent error on a VM: "Cause: Fail to init traffic control: no memory"; increasing hugepages does not help
On a virtual machine, dpvs keeps exiting with "Cause: Fail to init traffic control: no memory". Increasing hugepages does not resolve it. How can this be fixed?
How many NUMA nodes (sockets) does your VM have? If it's only one, please try modifying DPVS_MAX_SOCKET; other modifications may be needed. You can check with:
# lscpu | grep NUMA
NUMA node(s): 2
NUMA node0 CPU(s): 0-9,20-29
NUMA node1 CPU(s): 10-19,30-39
@beacer Thanks!
lscpu | grep NUMA
NUMA node(s): 1
NUMA node0 CPU(s): 0-7

Should I change DPVS_MAX_SOCKET to 1? What else needs to be modified?
After recompiling, the error persists:

[root@dpvs-test bin]# ./dpvs &
[1] 4244
[root@dpvs-test bin]# current thread affinity is set to FF
EAL: Detected 8 lcore(s)
EAL: Probing VFIO support...
EAL: WARNING: cpu flags constant_tsc=yes nonstop_tsc=no -> using unreliable clock cycles !
EAL: PCI device 0000:00:03.0 on NUMA socket -1
EAL:   probe driver: 1af4:1000 net_virtio
EAL: PCI device 0000:00:04.0 on NUMA socket -1
EAL:   probe driver: 1af4:1000 net_virtio
CFG_FILE: Opening configuration file '/etc/dpvs.conf'.
CFG_FILE: log_level = WARNING
NETIF: dpdk0:rx_queue_number = 8
NETIF: worker cpu1:dpdk0 rx_queue_id += 0
NETIF: worker cpu1:dpdk0 tx_queue_id += 0
NETIF: worker cpu2:dpdk0 rx_queue_id += 1
NETIF: worker cpu2:dpdk0 tx_queue_id += 1
NETIF: worker cpu3:dpdk0 rx_queue_id += 2
NETIF: worker cpu3:dpdk0 tx_queue_id += 2
NETIF: worker cpu4:dpdk0 rx_queue_id += 3
NETIF: worker cpu4:dpdk0 tx_queue_id += 3
NETIF: worker cpu5:dpdk0 rx_queue_id += 4
NETIF: worker cpu5:dpdk0 tx_queue_id += 4
NETIF: worker cpu6:dpdk0 rx_queue_id += 5
NETIF: worker cpu6:dpdk0 tx_queue_id += 5
NETIF: worker cpu7:dpdk0 rx_queue_id += 6
NETIF: worker cpu7:dpdk0 tx_queue_id += 6
NETIF: worker cpu8:dpdk0 rx_queue_id += 7
NETIF: worker cpu8:dpdk0 tx_queue_id += 7
EAL: Error - exiting with code: 1
  Cause: Fail to init traffic control: no memory
Same problem here.
Is there a solution yet?
http://dpdk.org/doc/guides/nics/overview.html#id1
For the VM's NIC, pick one from the supported list above. If you are on VMware, choose e1000 as the NIC type, not vmxnet3.
DPDK does support VMware's vmxnet3:
http://www.dpdk.org/doc/guides/nics/vmxnet3.html
I ran dpvs on a single-socket VM successfully. You can try the following (a sketch of the first step follows this list):
- set NETIF_MAX_SOCKETS and DPVS_MAX_SOCKET to 1
- configure only one lcore in /etc/dpvs.conf if the NIC does not support flow director, or if it is a single-rxq NIC
- turn off NETIF_PORT_FLAG_*_CSUM_OFFLOAD if the NIC does not support hardware checksum offload
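For reference, a minimal sketch of what the first change might look like. The header locations are assumptions (the macros move around between dpvs versions), so grep the source tree for the real definitions rather than trusting these paths:

/* Hedged sketch: cap the socket limits for a single-NUMA VM.
 * Header locations are assumptions; find the actual definitions with:
 *   grep -rn "NETIF_MAX_SOCKETS\|DPVS_MAX_SOCKET" include/ src/
 */
#define NETIF_MAX_SOCKETS   1   /* was 2: this VM has a single NUMA node */
#define DPVS_MAX_SOCKET     1   /* keep consistent with NETIF_MAX_SOCKETS */

Rebuild dpvs after changing them.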
I also hit "Cause: Fail to init traffic control: no memory" when running DPVS in my test environment.

Test environment:
VMware Workstation 12
vCPU: 1 socket, 1 core
vMem: 8G - 12G
vNIC: 2 x e1000 (one for management, one for DPDK)
Linux Distribution: CentOS 7.3
Kernel: 3.10.0-514.el7.x86_64
I deployed DPVS following the Quick Start steps (since my test environment has a single NUMA node, the hugepage step becomes: echo 2048 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages).
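As a sanity check that the reservation actually took effect, here is a small standalone C snippet (not part of dpvs; the sysfs path is the same one used in the echo command above):

#include <stdio.h>

int main(void)
{
    /* single-NUMA VM: node-less sysfs path, matching the echo above */
    FILE *f = fopen("/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages", "r");
    unsigned long n = 0;

    if (f != NULL && fscanf(f, "%lu", &n) == 1)
        printf("2MB hugepages reserved: %lu (%lu MB)\n", n, n * 2);
    if (f != NULL)
        fclose(f);
    return 0;
}

Note that 2048 pages is only 4GB of hugepage memory, which may be below what the default pool sizes need.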
@beacer Does DPVS have a minimum memory requirement to start? Or is there any configuration change that can reduce DPVS's memory demand? Due to resource constraints I can only install and test in a virtualized environment. Looking forward to a reply!
Single NUMA or not is unrelated to this; it is a matter of memory size, and 2048 hugepages is probably too small. You can try the changes @ywc689 suggested above. If your machine has little memory, you can lower some parameters, for example conn_pool_size; take a look at the places where rte_mempool_create is called (illustrated below).
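To make that pointer concrete: dpvs allocates its large pools through DPDK's rte_mempool_create(), and the "no memory" cause surfaces when such a call returns NULL because the element count times element size does not fit into the reserved hugepages. The names and numbers below are hypothetical, for illustration only, not the actual dpvs symbols:

#include <rte_mempool.h>
#include <rte_errno.h>

/* hypothetical default, in the spirit of conn_pool_size; shrink on small VMs */
#define CONN_POOL_SIZE_EXAMPLE  2097152

static struct rte_mempool *conn_pool_example;

static int conn_pool_init_example(int socket_id)
{
    conn_pool_example = rte_mempool_create("conn_example",
                                           CONN_POOL_SIZE_EXAMPLE, /* elements */
                                           256,   /* element size in bytes */
                                           0,     /* per-lcore cache size */
                                           0, NULL, NULL, NULL, NULL,
                                           socket_id, 0);
    if (conn_pool_example == NULL)
        return -rte_errno;  /* typically -ENOMEM on small hugepage setups */
    return 0;
}

Halving the element count roughly halves the hugepage footprint of that pool, at the cost of fewer concurrent connections.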
My environment: VMware with CentOS 7.4, two e1000 NICs (one for management, one for DPDK), 1 socket with 4 CPUs, 8G memory.
The configuration file was copied from dpvs.conf.single-nic.sample and modified as follows:
[root@localhost dpvs-master]# cat /etc/dpvs.conf
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
! This is dpvs default configuration file.
!
! The attribute "<init>" denotes a config item that takes effect only at initialization.
! global config
global_defs {
    log_level   WARNING
    ! log_file    /var/log/dpvs.log
}
! netif config
netif_defs {
    <init> device dpdk0 {
        rx {
            queue_number        8
            descriptor_number   1024
            rss                 all
        }
        tx {
            queue_number        8
            descriptor_number   1024
        }
        fdir {
            mode                perfect
            pballoc             64k
            status              matched
        }
        ! promisc_mode
        kni_name                dpdk0.kni
    }
}
! worker config (lcores)
worker_defs {
    <init> worker cpu1 {
        type    slave
        cpu_id  1
        port    dpdk0 {
            rx_queue_ids     0
            tx_queue_ids     0
            ! isol_rx_cpu_ids  9
            ! isol_rxq_ring_sz 1048576
        }
    }
}
! timer config
timer_defs {
    # cpu job loops to schedule dpdk timer management
    schedule_interval   500
}
! dpvs neighbor config
neigh_defs {
}

! dpvs ipv4 config
ipv4_defs {
    forwarding          off
}

! dpvs ipv6 config
ipv6_defs {
    disable             off
    forwarding          off
    route6 {
    }
}

! control plane config
ctrl_defs {
    lcore_msg {
    }
}

! ipvs config
ipvs_defs {
    conn {
    }

    udp {
        ! defence_udp_drop
        timeout {
            normal      300
            last        3
        }
    }

    tcp {
        ! defence_tcp_drop
        timeout {
            none        2
            established 90
            syn_sent    3
            syn_recv    30
            fin_wait    7
            time_wait   7
            close       3
            close_wait  7
            last_ack    7
            listen      120
            synack      30
            last        2
        }
        synproxy {
            synack_options {
                mss     1452
                ttl     63
                sack
                ! wscale
                ! timestamp
            }
            ! defer_rs_syn
            rs_syn_max_retry    3
            ack_storm_thresh    10
            max_ack_saved       3
            conn_reuse_state {
                close
                time_wait
                ! fin_wait
                ! close_wait
                ! last_ack
            }
        }
    }
}
! sa_pool config
sa_pool {
    pool_hash_size  16
}

The program starts but reports the following error:

[root@localhost bin]# ./dpvs
current thread affinity is set to F
EAL: Detected 4 lcore(s)
EAL: Probing VFIO support...
EAL: PCI device 0000:02:01.0 on NUMA socket -1
EAL:   Invalid NUMA socket, default to 0
EAL:   probe driver: 8086:100f net_e1000_em
EAL: PCI device 0000:02:05.0 on NUMA socket -1
EAL:   Invalid NUMA socket, default to 0
EAL:   probe driver: 8086:100f net_e1000_em
DPVS: dpvs version: 1.6-1, build on 2018.12.27.18:55:03
CFG_FILE: Opening configuration file '/etc/dpvs.conf'.
CFG_FILE: log_level = WARNING
NETIF: dpdk0:rx_queue_number = 8
NETIF: worker cpu1:dpdk0 rx_queue_id += 0
NETIF: worker cpu1:dpdk0 tx_queue_id += 0
NETIF: fail to flush FDIR filters for device dpdk0
DPVS: Start dpdk0 failed, skipping ...

Checking the IPs:

[root@localhost bin]# ./dpip addr show
inet 192.168.66.200/32 scope global dpdk0
     valid_lft forever preferred_lft forever sa_used 0 sa_free 4294950896 sa_miss 0
inet 192.168.66.50/24 scope global dpdk0
     valid_lft forever preferred_lft forever

From another machine, 192.168.66.50 responds to ping, but port 80 is not reachable.
Neither vmxnet3 nor e1000 virtual NICs work for me. @beacer @ywc689 Could you publish the recommended environment and configuration for running dpvs in a VM? That would make individual development and testing much easier. Thanks!
I am on a physical machine: 2 NUMA nodes, 192G memory, 100G allocated as hugepages. DPVS 1.9.2 runs fine, but after upgrading to 1.9.4 today I get the same error:
DTIMER: [54] timer initialized 0x7ff9eeae36a0.
DTIMER: [00] timer initialized 0x1636440.
EAL: Error - exiting with code: 1
  Cause: failed to init tc: no memory
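One hedged hypothesis for the two-node case: per-socket allocations during tc init need free hugepage memory on each NUMA node, so 100G reserved in total can still fail if the pages are unevenly spread or one node's share is exhausted. A hypothetical illustration of the pattern (not the actual dpvs 1.9.4 code):

#include <rte_common.h>
#include <rte_malloc.h>

/* fails (returns NULL) when NUMA node `socket_id` has no free hugepage
 * memory left, even if the other node still has plenty */
static void *tc_alloc_example(size_t size, int socket_id)
{
    return rte_zmalloc_socket("tc_example", size, RTE_CACHE_LINE_SIZE,
                              socket_id);
}

Checking /sys/devices/system/node/node*/hugepages/hugepages-2048kB/free_hugepages on both nodes would confirm or rule this out.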