VirtualBox

Ticket #3925 (closed defect: wontfix)

Opened 5 years ago

Last modified 4 years ago

VNICs don't work inside OpenSolaris guests using bridged networking

Reported by: nsolter
Owned by:
Priority: major
Component: network
Version: VirtualBox 2.2.0
Keywords: VNIC crossbow
Cc:
Guest type: Solaris
Host type: Solaris

Description

I'm trying to use a VNIC on top of an emulated physical adapter in an OpenSolaris guest in VirtualBox 2.2.0. I can create and configure the VNIC, but the network traffic I try to send on it doesn't seem to go anywhere.

Note that I'm not talking about VNICs on the host for bridged networking, but rather a VNIC on the guest itself. I do have bridged networking configured for the guest, in case that matters.

Change History

comment:1 Changed 5 years ago by ramshankar

Does the bridged networking work without the VNIC inside the guest?

comment:2 Changed 5 years ago by nsolter

The bridged networking is working fine with the emulated physical adapters, with or without VNICs configured inside the guests. It's only the VNIC interfaces inside the guests that can't communicate with any other interfaces.

comment:3 Changed 5 years ago by ramshankar

Please give the output of sudo ifconfig -a from the guest. This is most likely a VNIC or VNIC-configuration issue rather than a VirtualBox issue, since bridged networking is functioning properly.

comment:4 Changed 5 years ago by nsolter

Here's the configuration:

On the physical host (OpenSolaris 2009.06 build 111a):

nsolter@schubert:~# dladm show-link
LINK        CLASS    MTU    STATE    OVER
e1000g0     phys     1500   up       --
iwh0        phys     1500   down     --
vboxnet0    phys     1500   unknown  --
etherstub0  etherstub 9000  unknown  --
vnic0       vnic     9000   up       etherstub0
vnic1       vnic     9000   up       etherstub0
vnic2       vnic     9000   up       etherstub0
vnic3       vnic     9000   up       etherstub0
vnic4       vnic     9000   up       etherstub0
nsolter@schubert:~# dladm show-vnic
LINK         OVER         SPEED  MACADDRESS           MACADDRTYPE         VID
vnic0        etherstub0   0      a:b:c:d:1:2          fixed               0
vnic1        etherstub0   0      a:b:c:d:1:3          fixed               0
vnic2        etherstub0   0      a:b:c:d:1:4          fixed               0
vnic3        etherstub0   0      a:b:c:d:1:5          fixed               0
vnic4        etherstub0   0      a:b:c:d:1:6          fixed               0
nsolter@schubert:~# ifconfig -a
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
	inet 127.0.0.1 netmask ff000000 
e1000g0: flags=1104843<UP,BROADCAST,RUNNING,MULTICAST,DHCP,ROUTER,IPv4> mtu 1500 index 2
	inet 192.168.1.103 netmask ffffff00 broadcast 192.168.1.255
	ether 0:1c:7e:49:c3:ba 
vnic0: flags=1100843<UP,BROADCAST,RUNNING,MULTICAST,ROUTER,IPv4> mtu 9000 index 3
	inet 10.0.2.97 netmask ffffff00 broadcast 10.0.2.255
	ether a:b:c:d:1:2 
lo0: flags=2002000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv6,VIRTUAL> mtu 8252 index 1
	inet6 ::1/128 

The guest is named mendelssohn. In the network settings, it has two network adapters enabled. Each is an "Intel PRO/1000 MT Desktop (82540EM)", and each is attached to "Bridged Network". The first uses vnic1, the second vnic3, with the MAC address of each adapter set to match the MAC address of the corresponding VNIC on the physical host.
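For reference, an equivalent configuration from the host command line would look roughly like this. This is a sketch: the VM name mendelssohn and the MAC values are taken from the dladm output above, and the VBoxManage flags assume the syntax of VirtualBox 2.x-era releases.

```shell
# Sketch: attach the guest's two NICs to the host VNICs via bridged
# networking, with MAC addresses matching `dladm show-vnic` on the host
# (a:b:c:d:1:3 -> 0A0B0C0D0103, a:b:c:d:1:5 -> 0A0B0C0D0105).
VBoxManage modifyvm mendelssohn --nic1 bridged --bridgeadapter1 vnic1 \
    --macaddress1 0A0B0C0D0103
VBoxManage modifyvm mendelssohn --nic2 bridged --bridgeadapter2 vnic3 \
    --macaddress2 0A0B0C0D0105
```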

Inside the guest (OpenSolaris 2009.06 build 111a):

The nwam service is disabled:

demo@mendelssohn:~# svcs network/physical
STATE          STIME    FMRI
disabled       14:40:45 svc:/network/physical:nwam
online         14:40:48 svc:/network/physical:default

10.0.2.98 is configured on e1000g0:

demo@mendelssohn:~# ifconfig e1000g0
e1000g0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
        inet 10.0.2.98 netmask ffffff00 broadcast 10.0.2.255
        ether a:b:c:d:1:3 

This address can be pinged from the physical host:

nsolter@schubert:~# ping 10.0.2.98
10.0.2.98 is alive

Now I create a VNIC on e1000g1 and configure 10.0.2.110 on it:

demo@mendelssohn:~# dladm create-vnic -l e1000g1 vnic0
demo@mendelssohn:~# dladm show-vnic
LINK         OVER         SPEED  MACADDRESS           MACADDRTYPE         VID
vnic0        e1000g1      1000   2:8:20:b7:76:cc      random              0
demo@mendelssohn:~# ifconfig vnic0 plumb
demo@mendelssohn:~# ifconfig vnic0 inet 10.0.2.110/24 up
demo@mendelssohn:~# ifconfig -a
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
        inet 127.0.0.1 netmask ff000000 
e1000g0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
        inet 10.0.2.98 netmask ffffff00 broadcast 10.0.2.255
        ether a:b:c:d:1:3 
vnic0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 3
        inet 10.0.2.110 netmask ffffff00 broadcast 10.0.2.255
        ether 2:8:20:b7:76:cc 
lo0: flags=2002000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv6,VIRTUAL> mtu 8252 index 1
        inet6 ::1/128 
demo@mendelssohn:~# dladm show-link
LINK        CLASS    MTU    STATE    OVER
e1000g0     phys     1500   up       --
e1000g1     phys     1500   up       --
vnic0       vnic     1500   up       e1000g1
demo@mendelssohn:~# dladm show-vnic
LINK         OVER         SPEED  MACADDRESS           MACADDRTYPE         VID
vnic0        e1000g1      1000   2:8:20:b7:76:cc      random              0

I can't connect to it from the physical host:

nsolter@schubert:~# ping 10.0.2.110
no answer from 10.0.2.110

However, if I configure 10.0.2.110 on e1000g1 directly:

demo@mendelssohn:~# ifconfig vnic0 inet down
demo@mendelssohn:~# ifconfig vnic0 unplumb
demo@mendelssohn:~# ifconfig -a
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
        inet 127.0.0.1 netmask ff000000 
e1000g0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
        inet 10.0.2.98 netmask ffffff00 broadcast 10.0.2.255
        ether a:b:c:d:1:3 
lo0: flags=2002000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv6,VIRTUAL> mtu 8252 index 1
        inet6 ::1/128 
demo@mendelssohn:~# ifconfig e1000g1 plumb
demo@mendelssohn:~# ifconfig e1000g1 inet 10.0.2.110/24 up
demo@mendelssohn:~# ifconfig -a
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
        inet 127.0.0.1 netmask ff000000 
e1000g0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
        inet 10.0.2.98 netmask ffffff00 broadcast 10.0.2.255
        ether a:b:c:d:1:3 
e1000g1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 4
        inet 10.0.2.110 netmask ffffff00 broadcast 10.0.2.255
        ether a:b:c:d:1:5 
lo0: flags=2002000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv6,VIRTUAL> mtu 8252 index 1
        inet6 ::1/128 

Now I can connect to it from the physical host:

nsolter@schubert:~# ping 10.0.2.110
10.0.2.110 is alive

For more information, I suggest you try it out yourself. Note that I talked to some of the OpenSolaris networking folks last week, and they weren't surprised that this doesn't work. I think LDOMs have the same problem.

comment:5 Changed 5 years ago by ramshankar

After consulting with our Crossbow expert, a couple of things: why is the etherstub MTU 9000? The guest believes the MTU is 1500 while the host believes it is 9000.

Also, VNICs created inside the guest won't work because the host has no way of learning the guest VNIC's MAC address, so frames addressed to it are never delivered to the VM. That is why this does not work yet. Also, why do you require VNICs within the guest as well?
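One way to observe this filtering directly (a diagnostic sketch, using the host link and addresses from this ticket): capture ARP on the host side of the bridge while pinging the address configured on the in-guest VNIC. The ARP request for 10.0.2.110 should go out repeatedly with no reply, because frames carrying the guest VNIC's MAC never pass the host-side filter.

```shell
# Sketch: on the OpenSolaris host, watch ARP on the bridged link while
# pinging the in-guest VNIC's address. Expect repeated "who-is
# 10.0.2.110" requests and no answer.
pfexec snoop -d vnic0 arp &
ping 10.0.2.110
```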

comment:6 Changed 5 years ago by nsolter

I don't know why the MTU is 9000. I didn't set it explicitly.

There are a number of reasons one might want to use VNICs within the guests. In my case, I was attempting to cluster two virtual machine instances together using the Open HA Cluster software. A new feature of Open HA Cluster allows VNICs to be used as endpoints for the private interconnect paths between the two cluster nodes. As a cluster developer, it would be helpful if I could test and demo this feature within VirtualBox.

comment:7 follow-up: ↓ 8 Changed 5 years ago by handoyog

Hi nsolter,

My OS is 2009.06, and I was using a VNIC created from an etherstub, but I was using it with VirtualBox running Fedora 11. It seems to me that Fedora on VirtualBox doesn't recognize the assigned VNIC.

If your problem is solved, please let me know.

comment:8 in reply to: ↑ 7 Changed 5 years ago by ramshankar

Replying to handoyog:

Hi nsolter,

My OS is 2009.06, and I was using a VNIC created from an etherstub, but I was using it with VirtualBox running Fedora 11. It seems to me that Fedora on VirtualBox doesn't recognize the assigned VNIC.

If your problem is solved, please let me know.

Fedora host?

comment:9 Changed 5 years ago by nsolter

handoyog: I haven't tried it recently, but since this bug is still open, I don't expect it will be fixed yet ;-)

comment:10 Changed 5 years ago by handoyog

For ramshankar,

No: an OpenSolaris 2009.06 host, with both a Fedora 11 guest and an OpenSolaris 2009.06 guest. Neither worked.

Somehow, this guy has managed to make it work:

http://jblopen.com/node/8/

comment:11 Changed 5 years ago by nsolter

Handoyog,

No, what's described in http://jblopen.com/node/8/ is not this bug. This bug is about using VNICs inside the guests themselves. That means running the dladm commands in the guests, not on the host.

comment:12 Changed 4 years ago by ramshankar

  • Status changed from new to closed
  • Resolution set to wontfix

Since it is not possible for the host to know the in-guest MAC addresses of the VNICs, this will not work. Closing this ticket as "wontfix" for now, since it is beyond the scope of the bridged networking semantics common to all platforms. If and when we utilize Crossbow on the host for providing bridged networking, we will explore the possibility of making this work. Thank you for the report.
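As a footnote for readers hitting this today: later VirtualBox releases added a per-adapter promiscuous-mode policy for bridged networking, which is the usual way to let a guest receive frames addressed to MACs the host does not know about (such as an in-guest VNIC's). A sketch, assuming a VBoxManage version that supports this option and the VM name from this ticket:

```shell
# Sketch: allow the first bridged NIC to receive frames for any
# destination MAC, so traffic to an in-guest VNIC is no longer
# filtered out. Requires a VirtualBox version with --nicpromisc.
VBoxManage modifyvm mendelssohn --nicpromisc1 allow-all
```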
