Channel: VMware Communities : Discussion List - Nested Virtualization

Running nested ESXi under ESXi -- Debian (QEMU)


Hi All

 

Just learning the virtualization world and trying to install nested ESXi. Here is my setup:

I have ESXi 6.0 running.

Under ESXi I have Debian as a VM.

Under Debian I am running QEMU.

I installed ESXi inside ESXi 6.0, exported it as an OVA, and converted it to qcow2 for QEMU.

Running under QEMU I get a CPU error (the same ESXi VM works fine under ESXi).

I have attached an image and the config for reference; any help will be appreciated.
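
In case it is useful, a minimal sketch of the conversion and launch steps described above, with file names and sizes assumed (an OVA is just a tar bundle containing the OVF descriptor and the VMDK disks); ESXi also needs VT-x exposed by QEMU, which -cpu host provides when the physical CPU supports it:

# Unpack the OVA and convert the disk for QEMU (names are examples):
tar xf nested-esxi.ova
qemu-img convert -O qcow2 nested-esxi-disk1.vmdk nested-esxi.qcow2
# Boot with the host CPU passed through so VT-x reaches the nested ESXi:
qemu-system-x86_64 -enable-kvm -cpu host -smp 2 -m 4096 \
    -drive file=nested-esxi.qcow2,format=qcow2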

 

BB.


Running ESXi under Ubuntu


Hi guys,

Could you please help me run ESXi 6 under Ubuntu 14.04.4 with KVM-QEMU?

 

I have: Ubuntu 14.04.4 LTS
             ESXi 6.0 (vmware-201601001-3380124-iso)
             Running hypervisor: QEMU 2.4.0

 

I am trying to run a VM on ESXi, which is itself running under KVM.

ESXi installed successfully; here is the command as shown by ps:

qemu-system-x86_64 -enable-kvm -name esx -S -machine pc-i440fx-2.4,accel=kvm,usb=off -cpu Nehalem,+invpcid,+vmx -m 8096 -realtime mlock=off -smp 4,sockets=4,cores=1,threads=1 -uuid b2c3b37b-9596-98ab-8e0d-f05df89d2920 -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/esx.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -boot strict=on -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive file=/var/lib/libvirt/images/esx-20.img,if=none,id=drive-ide0-0-0,format=raw -device ide-hd,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0,bootindex=1 -drive if=none,id=drive-ide0-1-0,readonly=on,format=raw -device ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -netdev tap,fd=26,id=hostnet0 -device e1000,netdev=hostnet0,id=net0,mac=52:54:00:96:74:1c,bus=pci.0,addr=0x3 -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 -vnc 0.0.0.0:0 -device cirrus-vga,id=video0,bus=pci.0,addr=0x2 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x4

 

But when starting a VM on ESXi, I always get an error like:

Virtualized Intel VT-x/EPT is not supported on this platform. Continue without virtualized Intel VT-x/EPT? (msg.intel.hvhwmmu)

 

cat /proc/cpuinfo on the L0 host machine:
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer xsave avx f16c rdrand lahf_lm abm ida arat epb xsaveopt pln pts dtherm tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid

 

 

What is wrong? What kind of CPU flags should I add to run a VM on the L1 host machine (ESXi)?
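
A hedged suggestion, assuming libvirt manages this guest: the Nehalem CPU model exposes vmx but not necessarily the EPT capabilities the nested ESXi is asking for, so passing the host CPU through is the usual first thing to try.

# In the libvirt domain XML (virsh edit esx), replace the <cpu> element with:
<cpu mode='host-passthrough'/>

# Or, with plain QEMU, replace "-cpu Nehalem,+invpcid,+vmx" with:
-cpu host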

Testing Container technologies in a virtual Fedora VM: systemd-nspawn


I need to test a Fedora Linux (and maybe other Linux distros) feature: container technologies (systemd-nspawn).

I want to test it using a VM hosted by a VMware Workstation host.

Has anybody made the same test?

How should I configure the VM to support such technologies?
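
For reference, a minimal sketch of the feature under test, with the Fedora release number and paths assumed; since systemd-nspawn is an OS-level container rather than a hardware virtual machine, the Workstation VM should not need any special hardware-virtualization settings for it:

# Install the container tools and build a minimal Fedora tree:
sudo dnf install systemd-container
sudo dnf --installroot=/var/lib/machines/f24 --releasever=24 \
    install systemd passwd dnf fedora-release
# Boot the tree as a container:
sudo systemd-nspawn -D /var/lib/machines/f24 -b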

Regards

Marius

Nested ESXi Network Issue


I have two nested ESXi 6 machines running in a vCenter 5.5 environment. They are on an isolated standard vSwitch with a Windows 7 machine. The Windows 7 machine can ping both of the ESXi 6 machines, but the ESXi 6 machines cannot ping each other. Is there something simple I am missing? I have been all over the internet trying to figure this out.
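
One hedged guess: nested ESXi vmnics send traffic from MAC addresses the outer vSwitch has not learned, so the port group on the outer (physical) host usually needs promiscuous mode and forged transmits enabled. A sketch, with the vSwitch name assumed:

# On the outer ESXi host:
esxcli network vswitch standard policy security set \
    --vswitch-name=vSwitch1 --allow-promiscuous=true --allow-forged-transmits=true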

 

Any help would be greatly appreciated. I need a test lab to start studying for the VCP6 certifications.

 

Nested Lab vSwitch.png

Installing ESXi 6.0U2 in a VMware Workstation Pro 12 (12.1.1) VM with a SATA virtual hard disk, but the ESXi 6.0U2 installer cannot detect the SATA virtual hard drive


For the past two days I have been trying to install ESXi in Workstation Pro 12.

(1) Create a VM in Workstation: 4 CPUs, 16 GB memory, a 100 GB SATA hard drive, guest OS ESXi 6.0, and the virtual CD connected to the ESXi 6.0U2 ISO file.

(2) Power on this VM to install ESXi. During the installation, the installer cannot detect any SATA hard drive, so it cannot proceed.

If I change the hard drive type to SCSI, the installation has no problem; installation and running are smooth, and nested VMs in ESXi also have no problems.

 

Any hints or ideas?

I have tried adding the following to the Workstation .vmx file, but it did not help:

 

sata0.present = "TRUE"

sata0.virtualDev = "ahci"

 

sched.sata0:0.shares = "normal"
sched.sata0:1.shares = "normal"
sched.sata0:2.shares = "normal"
sched.sata0:3.shares = "normal"
sched.sata0:4.shares = "normal"

sched.sata0:0.throughputCap = "off"
sched.sata0:1.throughputCap = "off"
sched.sata0:2.throughputCap = "off"
sched.sata0:3.throughputCap = "off"
sched.sata0:4.throughputCap = "off"
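
For comparison, a hedged sketch of the working SCSI configuration in .vmx form (the disk file name is an example). A plausible explanation, offered as an assumption: the ESXi 6.0 installer may simply lack a driver for Workstation's virtual AHCI controller, which would be why the SCSI controller works:

scsi0.present = "TRUE"
scsi0.virtualDev = "lsilogic"
scsi0:0.present = "TRUE"
scsi0:0.fileName = "ESXi-disk.vmdk"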

 

thanks

Nested ESXi 6u2 Host on VMware Workstation v12 for Linux Network Problems


Hello everyone,

 

I have a virtual lab environment running in VMware Workstation v12 for Linux, and my ESXi hosts are not working properly on one of the virtual networks. All of my troubleshooting suggests an issue with the nested ESXi hosts that may be a bug of some sort, but I want to make sure that I have done everything correctly first. That is why I'm posting here, in the hope that if I did make a mistake someone can point it out to me.

 

Physical System

8 core Intel Xeon 2.1GHz CPU

128 GB of RAM

OS - Linux Mint 17.3 64-bit w/3.19.0-32-generic Linux kernel (fully updated as of this posting)

VMware Workstation 12 Pro - 12.1.1 build-3770994 (fully updated as of this posting)

 

Nested ESXi Host VM

ESXi v6.0.0 (Build 3825889, fully updated as of this posting)

4 "physical" NICs (only 3 being used for now)

    - all use e1000 NIC virtual hardware, but also tried vmxnet3 NICs with no difference

    - vSwitch0 uses vmnic0 & 1 on virtual network vmnet16 in an active/standby pair

    - vSwitch1 uses vmnic2 on virtual network vmnet18

    - vmk0 used for management on vSwitch0, and vmk1 to be used for iSCSI on vSwitch1

    - promiscuous mode and forged transmits enabled on all vSwitch Port Groups (enabling or disabling these features makes no difference)

 

Testing Done So Far

I have verified that all IP addresses and netmasks being used are correct.

Using vmkping I have pinged other nodes on the vmnet16 network successfully.

Using vmkping I have attempted to ping other nodes on the vmnet18 network, but that has been unsuccessful.

I have deployed other non-ESXi VMs onto the vmnet18 network, and they are able to ping each other but are unable to ping or be pinged by the ESXi host.

I have tried various virtual hardware NICs as mentioned before, but with no changes in results.

I have tried using LAN segments instead of the host-only network vmnet18 with no changes in results.

 

When I view the state of the ESXi host's NICs via vCenter or the embedded host client, vmnic0 & 1 both show network information, but vmnic2 shows no networks. Yet I know that there is a network with various VMs communicating on it. Furthermore, I was able to get all of this working on a Windows system running Workstation 10 (this is the laptop that my employer provides me with).

Having built working nested ESXi labs on various platforms as well as physical environments in the past, I'm very confused as to why I can't get this particular setup to work. At this point my gut tells me that it is probably a bug of some sort with the nested ESXi hosts themselves. Since I can get everything working on vmnet16, including the ESXi host management and the VCSA that I am using, I am certain that my vSwitch configuration is correct (other than IP space and vmnics, the configurations are basically identical). Since I can get other VMs to communicate over the vmnet18 network, I do not see how this can be a VMware Workstation or Linux physical host issue. Is there something obvious that I am missing here? I have read about nested ESXi hosts running on VMware Workstation having known issues and bugs with networking. Has anyone else encountered this?
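
One hedged thing to check that is specific to Linux hosts: Workstation on Linux only lets a virtual NIC enter promiscuous mode if the user running the VM has write access to the corresponding vmnet device, and nested ESXi networking depends on promiscuous mode. A sketch (coarse-grained; a dedicated group would be tidier):

# On the Linux host running Workstation:
sudo chmod a+rw /dev/vmnet*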

 

Thanks for any help that others can provide!

 

Regards,

 

Patrick

ESXi inside KVM


Hello!

 

I wonder if it is possible to run ESXi 5.0/5.1 inside KVM (Proxmox)?

I need to run a few for testing purposes. I cannot set up any other virtualization software.

I need only ESXi itself, even without virtual machines inside it :)
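
In case it helps, a hedged sketch of what this usually requires on the Proxmox host, assuming an Intel CPU (the VM ID in the config path is a placeholder):

# Enable nested virtualization in the KVM kernel module:
echo "options kvm-intel nested=Y" > /etc/modprobe.d/kvm-intel.conf
modprobe -r kvm_intel && modprobe kvm_intel

# Expose the host CPU (including vmx) to the guest, e.g. in
# /etc/pve/qemu-server/<vmid>.conf:
cpu: host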

 

Thanks!

Unable to configure iSCSI datastore

$
0
0

I created an ESXi 5.5 Update 2 VM hosted by a (physical) ESXi 5.5 Update 2 host that is a member of a vSphere 5.5 infrastructure.

Everything works fine except the iSCSI software adapter and the iSCSI initiator.

I am unable to make it connect to any kind of iSCSI target, hosted either on a Windows Server 2012 R2 VM or on a Nas4Free VM.

I created a VMkernel port and assigned an IP, but it does not respond to ping: should it answer?

Is there any specific configuration I should implement?

How can I troubleshoot the iSCSI initiator?
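
A hedged starting point for troubleshooting from the nested host's shell, with the interface name and target IP as examples:

# Test the VMkernel port used for iSCSI explicitly:
vmkping -I vmk1 192.168.1.50
# Inspect the software iSCSI adapter and any sessions:
esxcli iscsi adapter list
esxcli iscsi session list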

Regards

marius


I can't install ESXi 5.5 inside ESXi 5.5, although the server supports virtualization technology??

Not able to install Windows Server 2012 due to VT-x problems (64-bit)


Hi guys,

 

I'm currently having some trouble with the installation of Windows Server 2012 in VMware Workstation 12 Pro.

The picture below shows that I have no VT-x, but I do have a 64-bit processor, so I am a little bit confused. Can someone help me?

 

b8b64e8302027a6bb40cff207f466433.png

 

Kind regards

VLANs in Nested ESXi Servers


In my lab environment, I have an ESXi 6 host and 2 ESXi 6 nested servers that are working with VLANs set up on a physical switch.

 

The host vSwitch is configured with VLAN 4095 for the VMkernel and VM port groups.

Nested Guest #1 has its vSwitch configured with VLAN 4095 for the VMkernel, and I have multiple port groups for the various VLANs that are configured on the physical switch (180-Management, 181-Application, 196-Provisioning, etc.).

Nested Guest #2 has its vSwitch configured with VLAN 4095 for the VMkernel, and I have multiple port groups for the various VLANs that are configured on the physical switch (280-Management, 281-Application, 296-Provisioning, etc.).

 

Here is the hurdle I am having difficulty getting over. I need to set up 6 more servers just like this one with the same VLAN configuration connected to the same switch. Since you cannot have duplicate VLAN numbers in the same physical switch, I am looking for suggestions.

 

Is it possible to not have the VLANs configured in the physical switch and use virtual switches or an appliance to simulate the physical switch?
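
A hedged thought on that question: if a lab's traffic never needs to leave the host, an internal-only vSwitch with no physical uplinks keeps the VLAN tags entirely virtual, so the same VLAN IDs could be reused per lab without ever touching the physical switch. A sketch on the outer host, with names as examples:

# Create an internal-only vSwitch (attach no uplinks) plus a trunk
# port group that passes all VLAN tags through to the nested hosts:
esxcli network vswitch standard add --vswitch-name=vSwitch-Lab1
esxcli network vswitch standard portgroup add \
    --portgroup-name=Lab1-Trunk --vswitch-name=vSwitch-Lab1
esxcli network vswitch standard portgroup set \
    --portgroup-name=Lab1-Trunk --vlan-id=4095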

 

Really would appreciate any suggestions.

Nested VMware Workstation on KVM host


I am trying to run an instance of vmware workstation on top of KVM.

 

The first problem was related to an "incompatible hypervisor" error, which I fixed by adding vmx.allowNested = "TRUE" to the .vmx file.

This is apparently something I need to put in all my .vmx files.

 

Question 1: can I set some default value to allow nesting?
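
On Question 1, a hedged sketch: Workstation on Linux also reads a host-wide configuration file, and the setting may be honored there as a default for all VMs (this is an assumption on my part, not something I have confirmed):

# Append the nesting default to the host-wide Workstation config:
echo 'vmx.allowNested = "TRUE"' | sudo tee -a /etc/vmware/config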

 

Second problem:

 

When I start a 64 bit virtual machine, I get

"Binary translation is incompatible with long mode on this platform. Long mode will be disabled in this virtual environment and applications requiring long mode will not function properly as a result. See http://vmware.com/info?id=152 for more details."

 

If I click ok to that I get

"This virtual machine is configured for 64-bit guest operating systems. However, 64-bit operation is not possible.

This host supports Intel VT-x, but the Intel VT-x implementation is incompatible with VMware Workstation.

For more detailed information, see http://vmware.com/info?id=152."

 

I have tested with KVM on the same virtual machine, and it has no problem running the same type of virtual machine. My test machine is a new VM with the Debian 8 installer in the "CD" drive.

 

Question 2:

How can I make this work?

Do you have any hints, links, references, examples or such that might help me?
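
On Question 2, a hedged checklist for the KVM side, assuming an Intel host and libvirt (the domain name is a placeholder); the error suggests VT-x is not reaching the guest in a form Workstation accepts, and passing the host CPU through is the usual first step:

# Confirm nested virtualization is enabled in the host kernel module:
cat /sys/module/kvm_intel/parameters/nested    # should print Y or 1
# Expose the host CPU, including VT-x, to the guest:
virsh edit debian8    # then set: <cpu mode='host-passthrough'/>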

 

Versions

- host is debian 8

- virtual machine is also debian 8

- vmware workstation pro 12.5

VMware ESXi within Hyper-V (Server 2012) networking issue


I am trying to get ESXi running within Hyper-V, and it won't recognize the virtual switch even though I have set it up properly and it works with any other VMs on this system.

 

 

http://i.imgur.com/4rCA3.png

 

Hyper-V-Virtual-Switch-Issue-421.PNG

 

I only have one NIC but it works fine with other VMs.
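
A hedged guess: ESXi ships no driver for Hyper-V's default synthetic network adapter, so the VM usually needs a Legacy Network Adapter instead (available on Generation 1 VMs only). A PowerShell sketch, since this is Hyper-V, with the VM and switch names as examples:

# With the VM shut down, swap the synthetic NIC for a legacy one:
Remove-VMNetworkAdapter -VMName "ESXi"
Add-VMNetworkAdapter -VMName "ESXi" -IsLegacy $true -SwitchName "ExternalSwitch"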

install XenServer 7 as a VM on top of ESXi 5.5


Hi,

 

Has any one successfully installed Citrix XenServer 7 as a VM on top of ESXi 5.5?

I encountered a Dom0 crash during installation. I have tried various tricks without any luck.

I know I am not the only one who has encountered this problem. Someone was able to install XenServer 7 as a VM on top of ESXi 6 (Cant install XENSERVER 7 on esxi 5.5 - Server Installation - Discussions).

 

I was able to install Citrix XenServer 6.5 as a VM on top of ESXi 5.5, and also run VMs inside of XenServer 6.5.

 

Thanks for any tips.

 

Kong

OpenStack (KVM/QEMU) in vSphere - Occasionally segfaults occurring in OpenStack VMs with more than one vCPU configured


I am running OpenStack Newton as an HA deployment on Ubuntu 16.04 vSphere VMs in conjunction with Ceph storage (Jewel). The deployment comprises:

  • 2 controller VMs
  • 3 compute VMs
  • 3 storage VMs

Everything is up and running. There are no configuration issues known regarding the OpenStack environment. The same setup is working properly on "real" hardware-based machines.

 

Unfortunately, I am confronted with segfaults occasionally occurring at startup of my OpenStack VMs (tested with Ubuntu 14.04/16.04). These segfaults appear randomly in different Ubuntu services, with no pattern as to where or when. These faults are definitely not software-related and appear only when OpenStack VMs are configured with more than 1 vCPU. The probability of creating a broken OpenStack VM rises with the vCPU count, which means segfaults occur more frequently in an OpenStack VM with 4 vCPUs than in one with only 2 vCPUs, and they never happen in VMs with only 1 vCPU. I was able to spawn and destroy 500 VMs successfully in series using only 1 vCPU.

 

ESX/ESXi-Version: VMware ESXi, 6.0.0, 3825889

 

I am using KVM/QEMU as hypervisor on my compute nodes, so there must be a problem when running KVM on ESXi-based nodes. Hardware virtualization support is activated for my vSphere VMs:

$ egrep -c '(vmx|svm)' /proc/cpuinfo

4

$ kvm-ok

INFO: /dev/kvm exists

KVM acceleration can be used


I have also tested using a different clock source for my vSphere VMs and switched from tsc to acpi_pm, but the issue is still occurring when more than 1 vCPU is configured in my OpenStack VMs. All OpenStack guest VMs use kvm-clock as clock source.


The problem must be related to ESXi, because KVM on real hardware works without any issues, independently of how many vCPUs are configured for an OpenStack VM.
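
One hedged avenue, offered as an assumption rather than a confirmed fix: nested KVM sometimes trips over MSRs that the outer hypervisor does not emulate, and KVM can be told to ignore unhandled MSR accesses instead of injecting faults:

# On each compute node, tell KVM to ignore unhandled MSR reads/writes:
echo 1 | sudo tee /sys/module/kvm/parameters/ignore_msrs
# Make it persistent across reboots:
echo "options kvm ignore_msrs=1" | sudo tee /etc/modprobe.d/kvm.conf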


Any hints on what to do?


KVM on ESXi causes segmentation faults in guest VMs with more than one vCPU


This post is related to this one:

OpenStack (KVM/QEMU) in vSphere - Occasionally segfaults occurring in OpenStack VMs with more than one vCPU configured

 

I was able to narrow down the problem described above. I have set up a vSphere VM based on Ubuntu 16.04 and installed KVM. I defined several guest VMs based on different Linux operating systems. When you add more than one vCPU to a KVM guest VM, you occasionally receive segmentation faults in different processes, with no pattern as to where or when. The probability increases as you add more vCPUs and when the hosting machine is under high load.

 

I know that nested virtualization is not officially supported, but we are so close. The scenario can be reproduced easily.
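
For anyone wanting to reproduce it, a hedged sketch of the kind of guest I mean (the image name and sizes are examples); the segfaults show up in the guest's processes under load:

# Inside the vSphere VM, boot a multi-vCPU KVM guest:
qemu-system-x86_64 -enable-kvm -cpu host -smp 4 -m 2048 \
    -drive file=guest.qcow2,format=qcow2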

 

Is there anyone out there who can help me get KVM running properly?

 

Regards,

Jens

VMware Player runs smoothly on a 5-year-old PC, but is super slow on an Alienware laptop


I tested a few VMware Player created VMs, all hosting Ubuntu 12.04, 14.04, and 16.04. They all work smoothly on a 5-year-old PC (I am embarrassed to mention its specs). The confusing fact is that none of them works well on a new Alienware 15 R3 laptop with 16 GB memory, an SSD, and an Nvidia 1070 graphics card. Even a text editor needs a wait time of a few seconds to tens of seconds to refresh the view.

A Linux tool update has been applied to the VM running on the new laptop; there is no difference before and after the update. This update was not applied on the old desktop, and it runs smoothly there.

8 GB of RAM is allocated on the new laptop; 2 GB is more than enough, as tested on the old desktop.

Disk defragmentation in Windows has been performed.

The VMware Player data file is one big file instead of being split into multiple smaller files.

The VMware Player data file is stored on the SSD drive.

Any suggestions?

Running ESX under KVM with VT-x/EPT


I've been experimenting with setting up a test lab on a Linux host with KVM as the layer 1 hypervisor and ESX as the layer 2. With KVM configured for nested, ept, and ignore_msrs, and ESX set to vmx.allowNested and hv.assumeEnabled, things work fine, except that VMs under ESX don't use EPT.
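
For reference, a hedged sketch of the settings just described, assuming an Intel host (the file paths are the usual ones, but treat them as assumptions):

# /etc/modprobe.d/kvm.conf on the Linux host:
options kvm-intel nested=Y ept=Y
options kvm ignore_msrs=1

# And in the ESX VM's .vmx configuration:
vmx.allowNested = "TRUE"
hv.assumeEnabled = "TRUE"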


With some help I'm now trying to determine which of these flags ESX requires for EPT to function, e.g. to get HWMMU working:

 

2016-02-11T21:17:00.157Z| vmx| I120: VPID and EPT Capabilities (0x00000d0106114041)
2016-02-11T21:17:00.157Z| vmx| I120:   R=0/W=0/X=1                      yes
2016-02-11T21:17:00.157Z| vmx| I120:   Page-walk length 3               yes
2016-02-11T21:17:00.157Z| vmx| I120:   EPT memory type WB               yes
2016-02-11T21:17:00.157Z| vmx| I120:   2MB super-page                   yes
2016-02-11T21:17:00.157Z| vmx| I120:   1GB super-page                    no
2016-02-11T21:17:00.157Z| vmx| I120:   INVEPT support                   yes
2016-02-11T21:17:00.157Z| vmx| I120:   Access & Dirty Bits               no
2016-02-11T21:17:00.157Z| vmx| I120:   Type 1 INVEPT                    yes
2016-02-11T21:17:00.157Z| vmx| I120:   Type 2 INVEPT                    yes
2016-02-11T21:17:00.157Z| vmx| I120:   INVVPID support                  yes
2016-02-11T21:17:00.157Z| vmx| I120:   Type 0 INVVPID                   yes
2016-02-11T21:17:00.157Z| vmx| I120:   Type 1 INVVPID                    no
2016-02-11T21:17:00.157Z| vmx| I120:   Type 2 INVVPID                   yes
2016-02-11T21:17:00.157Z| vmx| I120:   Type 3 INVVPID                   yes
...
2016-02-11T21:17:00.158Z| vmx| I120: MONITOR MODE: allowed modes          : BT32 HV HWMMU
2016-02-11T21:17:00.158Z| vmx| I120: MONITOR MODE: user requested modes   : BT32 HV HWMMU
2016-02-11T21:17:00.158Z| vmx| I120: MONITOR MODE: guestOS preferred modes: HWMMU BT32 HV
2016-02-11T21:17:00.158Z| vmx| I120: MONITOR MODE: filtered list          : HWMMU BT32 HV
2016-02-11T21:17:00.158Z| vmx| I120: HV Settings: virtual exec = 'hardware'; virtual mmu = 'hardware'

 

So far I've been able to get VMs to use HWMMU with the above flags active in a patched host kernel.

 

Could anyone tell me which flags ESX actually requires and which are optional or preferable for EPT/HWMMU to work?

 

Additionally, are more or different flags required with "vhv.enable = TRUE"? So far I have only tested with "vhv.enable = FALSE".

 

It's a big performance jump to have this working so it would be very much appreciated. Thanks!


Join the vCloud Connector 1.5 Beta


The public beta of vCloud Connector 1.5 was released today.  Join the beta to test out new features and more reliable workload transfers.  Highlights for this release include:

 

More Reliable Transfer of Workloads: Transfer virtual machines and templates between clouds more reliably and efficiently with features like multi-part transfer, built-in compression, and checkpoint restart.

 

Single Pane of Glass, Now through Web UI:

Continue to view VMs and templates across multiple clouds and perform basic operations such as power and console access within the vSphere Client

New: Also accessible through Web-based UI of vcloud.vmware.com

 

Support for latest version of vSphere (5.0) & vCloud Director (1.5)

 

Additional Enhancements

-Search for VM by name within a single cloud, management of server and node architecture

-Management of vCC Server & Node including update

-Internationalization ready, i18N Level 1: vCC can run on non-English OS and handle non-English text
(NOTE: vCC has not been localized to any language other than English)

 

Join the Beta Program today:

vmware.com/go/vcc1.5beta
