Upgrade Brocade or QLogic BR-1020 in ESXi

SOURCE:
-Download from
ESXi55-BCD-bna-3.2.6.0-00000-3204985.zip
BCD-ESXi5.5-bfa-3.2.6.0-00000-3000040.zip
brocade_adapter_boot_fw_v3-2-7-0.zip

Upgrade firmware
OPTION1
-download "Multi-Boot Code for BR-Series Adapters LiveCD" from
and boot from it
# bcu boot --update brocade_adapter_boot_fw_v3-2-7-0 -a

OPTION2
# unzip brocade_adapter_boot_fw_v3-2-7-0.zip
# esxcli brocade bcu --command="boot --update brocade_adapter_boot_fw_v3-2-7-0 -a"

to install network driver
# unzip ESXi55-BCD-bna-3.2.6.0-00000-3204985.zip
# esxcli software vib install -d /tmp/ESXi55-BCD-bna-3.2.6.0-00000-offline_bundle-3204985.zip
# reboot

to install fc driver
# unzip BCD-ESXi5.5-bfa-3.2.6.0-00000-3000040.zip
# esxcli software vib install -d /tmp/BCD-ESXi5.5-bfa-3.2.6.0-00000-offline_bundle-3000040.zip
-Getting WWNs for all storage adapters with esxcli
# esxcli storage core adapter list | grep -i fc | awk '{print $4}'
fc.20000005334856b6:10000005334856b6
fc.20000005334856b7:10000005334856b7
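For scripting against saved output, the same extraction can be done off-box; a minimal Python sketch (the function name and sample lines are mine; it assumes the adapter UID is the 4th column, as in the awk example above):

```python
# Extract FC node/port WWN pairs from saved `esxcli storage core adapter list`
# output. Assumes the adapter UID (fc.<node>:<port>) sits in the 4th column.
def extract_fc_wwns(adapter_list_output):
    wwns = []
    for line in adapter_list_output.splitlines():
        fields = line.split()
        if len(fields) >= 4 and fields[3].lower().startswith("fc."):
            node, port = fields[3][3:].split(":")
            wwns.append({"node_wwn": node, "port_wwn": port})
    return wwns
```

Feed it the captured command output; each dict holds one adapter's node and port WWN.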

-to uninstall fc driver
# esxcli software vib remove -n scsi-bfa
# reboot
NOTE:
-to connect the BR-1020 via twinax, you need a Brocade, Cisco, or EMC active twinax cable

Upgrade ESXi 6.0 to 6.5

METHOD1 via CLI Offline
SOURCE:
-download VMware-ESXi-6.5.0-4564106-depot.zip from https://my.vmware.com/group/vmware/get-download?downloadGroup=ESXI650
-enable ssh on ESXi
-scp VMware-ESXi-6.5.0-4564106-depot.zip into ESXi /tmp
-Shut down all VMs running on your ESXi host, put the host into maintenance mode, and then connect to your ESXi server via SSH
# cd /tmp
# esxcli software profile update -p ESXi-6.5.0-4564106-standard -d /tmp/VMware-ESXi-6.5.0-4564106-depot.zip

# reboot

METHOD2 via CLI Online
SOURCE:
https://www.vladan.fr/how-to-upgrade-esxi-6-0-to-6-5-via-cli-on-line/
-enable ssh on ESXi
-Shut down all VMs running on your ESXi host, put the host into maintenance mode, and then connect to your ESXi server via SSH
# cd /tmp
# esxcli network firewall ruleset set -e true -r httpClient
# esxcli software sources profile list -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml | grep -i ESXi-6.5
# esxcli software profile update -p ESXi-6.5.0-4564106-no-tools -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml
# reboot

METHOD3 via ISO
SOURCE:
https://www.vladan.fr/how-to-upgrade-esxi-6-0-to-6-5-via-iso/
-download ESXi iso from https://my.vmware.com/web/vmware/info/slug/datacenter_cloud_infrastructure/vmware_vsphere/6_5
-burn iso into cd
-set BIOS to boot from CD
-reboot ESXi
-on boot select "Upgrade ESXi, preserve VMFS datastore"

METHOD4 via USB media
SOURCE:
https://www.vladan.fr/how-to-create-a-usb-media-with-esxi-6-5-installation/
-download ESXi iso from https://my.vmware.com/web/vmware/info/slug/datacenter_cloud_infrastructure/vmware_vsphere/6_5

-download and install YUMI
ALTERNATIVE1: Rufus
ALTERNATIVE2: UNetbootin

-run YUMI and write the iso to the pen drive
-set BIOS to boot from USB
-reboot ESXi
-on boot select "Upgrade ESXi, preserve VMFS datastore"

METHOD5 via Update Manager
SOURCE:
https://www.vladan.fr/how-to-upgrade-a-esxi-6-0-to-esxi-6-5-via-vmware-update-manager/

 

-update ESXi to latest patches
METHOD1: CLI Offline
-download the latest ESXi patches from
https://my.vmware.com/web/vmware/details?downloadGroup=ESXI650D&productId=646&rPId=15839
-scp ESXi650-201704001.zip into ESXi /vmfs/volumes//
# esxcli software vib update -d /vmfs/volumes//ESXi650-201704001.zip

METHOD2: CLI Online
# esxcli network firewall ruleset set -e true -r httpClient
# esxcli software profile install -p ESXi-6.5.0-20170404001-standard -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml

-enable nested hypervisor
# echo 'vhv.enable = "TRUE"' >> /etc/vmware/config

-enable copy/paste between guest vm
# vi /etc/vmware/config
add these
vmx.fullpath = "/bin/vmx"
isolation.tools.copy.disable = "FALSE"
isolation.tools.paste.disable = "FALSE"
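The edits above can also be applied idempotently with a small script; a sketch under the assumption that the config file is plain `key = "value"` lines (`set_config` is my own helper, not an ESXi tool; try it on a copy first):

```python
# Idempotently set key = "value" lines in a /etc/vmware/config-style file.
# set_config is a hypothetical helper, not part of ESXi.
def set_config(path, settings):
    try:
        with open(path) as f:
            lines = f.read().splitlines()
    except IOError:                      # file may not exist yet
        lines = []
    done = set()
    out = []
    for line in lines:
        key = line.split("=", 1)[0].strip()
        if key in settings:
            out.append('%s = "%s"' % (key, settings[key]))  # rewrite in place
            done.add(key)
        else:
            out.append(line)
    for key in settings:                 # append any keys not already present
        if key not in done:
            out.append('%s = "%s"' % (key, settings[key]))
    with open(path, "w") as f:
        f.write("\n".join(out) + "\n")

# usage (on the host):
# set_config("/etc/vmware/config", {"vhv.enable": "TRUE",
#     "isolation.tools.copy.disable": "FALSE",
#     "isolation.tools.paste.disable": "FALSE"})
```

Because existing keys are rewritten in place, rerunning it never duplicates lines the way repeated `echo >>` would.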

-install VMware Host Client
go to https://labs.vmware.com/flings/esxi-embedded-host-client#instructions
download and scp into /tmp
esxui-signed-5214684.vib
# esxcli software vib install -v /tmp/esxui-signed-5214684.vib

Install VMware Remote Console
go to https://labs.vmware.com/flings/esxi-embedded-host-client#instructions
download and scp into /tmp
VMware-Remote-Console-9.0.0-Linux.vib
VMware-Remote-Console-9.0.0-MacOS.vib
VMware-Remote-Console-9.0.0-Windows.vib

# esxcli software vib install -v /tmp/VMware-Remote-Console-9.0.0-Linux.vib
# esxcli software vib install -v /tmp/VMware-Remote-Console-9.0.0-MacOS.vib
# esxcli software vib install -v /tmp/VMware-Remote-Console-9.0.0-Windows.vib

VMware Remote Console 9.0 for Linux
VMware Remote Console 9.0 for Mac

VMware Remote Console 9.0 for Windows

Now you can access ESXi using browser at https://esxserverip/ui

Proxmox Import/Export OVA

Import from OVA
NOTE:
-disk must be a single file, not split

-scp ova into /tmp. In my case I am using dsl-4-4-10.ova
# cd /tmp
# tar xf dsl-4-4-10.ova
# qemu-img convert -f vmdk DSL-4.4.10-disk1.vmdk -O qcow2 DSL-4.4.10.qcow2
-check dsl-4-4-10.ova configuration
# cat DSL-4.4.10.ovf | grep -e "Memory RAMSize" -e "CPU count" -e "Netw" -e "Disk"
All I found were the disk size and NIC count; the rest can stay at defaults
-create an empty vm with 1 nic. The disk can be the smallest possible size because I will overwrite it later. Disk type can be vmdk or qcow2
# cp DSL-4.4.10.qcow2 /var/lib/vz/images/100/vm-100-disk-1.qcow2
-start the vm in proxmox
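The grep check of the OVF above can also be done with a small parser; a sketch (namespace handling is deliberately loose, and the sample descriptor in the test is a made-up minimal OVF, not the real DSL one):

```python
# Summarize CPU count, RAM, and disk capacity from an OVF descriptor.
# CIM resource types: 3 = processor, 4 = memory (VirtualQuantity in MB).
import xml.etree.ElementTree as ET

def ovf_summary(ovf_text):
    root = ET.fromstring(ovf_text)
    summary = {}
    for elem in root.iter():
        tag = elem.tag.split("}")[-1]            # drop the XML namespace
        if tag == "Item":
            fields = {c.tag.split("}")[-1]: (c.text or "") for c in elem}
            if fields.get("ResourceType") == "3":
                summary["cpus"] = fields["VirtualQuantity"]
            elif fields.get("ResourceType") == "4":
                summary["memory_mb"] = fields["VirtualQuantity"]
        elif tag == "Disk":
            attrs = {k.split("}")[-1]: v for k, v in elem.attrib.items()}
            summary["disk_capacity"] = attrs.get("capacity")
    return summary
```

Run it against the `.ovf` extracted from the ova to get the values for the empty vm you create in Proxmox.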

Export to OVA
-let's say I already have a vm with a qcow2 disk
# cd /var/lib/vz/images/101
# qemu-img convert -f qcow2 vm-101-disk-1.qcow2 -O vmdk /tmp/DSL-4.4.10-disk1.vmdk
-create an empty vm with the same OS, RAM, disk size, nic
NOTE:
-disk must be a single file, not split
-scp DSL-4.4.10-disk1.vmdk into your vm directory in vmware and rename disk name to vmname.vmdk
-click vm Settings
-click Hard Disk, click Remove
-click Hard Disk, click Add
choose the same disk type as the source vm disk, for example IDE or SCSI
Attach the vmname.vmdk that we copied before
-click File/Export to OVF menu
name it vmname.ova and click Save

ESXi 6.0 Unetlab to Cisco Catalyst trunk

I had a problem with Unetlab inside ESXi using 2 trunk ports.
Once 1 of the trunk cables was disconnected, the issue went away.
The problems were:
-the node (in the example below, a Mikrotik) can't ping the gateway, but the unetlab vm can
-after ESXi is restarted, I can't ping ESXi anymore
The solution:
Cisco:
# sh run
port-channel load-balance src-dst-ip
interface Port-channel1
 switchport trunk encapsulation dot1q
 switchport mode trunk
 switchport nonegotiate
 flowcontrol receive desired
interface FastEthernet2/0/1
 switchport trunk encapsulation dot1q
 switchport mode trunk
 switchport nonegotiate
 speed 100
 duplex full
 flowcontrol receive desired
 channel-group 1 mode on
 spanning-tree portfast trunk
 spanning-tree bpdufilter enable
!
interface FastEthernet2/0/2
 switchport trunk encapsulation dot1q
 switchport mode trunk
 switchport nonegotiate
 speed 100
 duplex full
 flowcontrol receive desired
 channel-group 1 mode on
 spanning-tree portfast trunk
 spanning-tree bpdufilter enable
ESXi:
(screenshots of the ESXi vSwitch configuration not included)

Inject Driver into ESXi ISO

If you can't continue installing ESXi because of a missing driver, follow these steps.
The example below uses an HPE ESXi iso, but you can use any ESXi iso

Download
-the required driver from https://vibsdepot.v-front.de/wiki/index.php/List_of_currently_available_ESXi_packages
-put all of the above into c:\download
-double-click ESXi-Customizer-v2.7.2.exe and extract it into c:\download
-double-click c:\download\ESXi-Customizer-v2.7.2\ESXi-Customizer.cmd
You can now burn the iso to a cd or write it to usb using https://rufus.akeo.ie/

Installing ESXi 6.0 Update 2

Download ESXi 6.0 update2 from
#vmware -v
VMware ESXi 6.0.0 build-2615704

#esxcli software vib install -n esx-base -n vsan -n vsanhealth -d /vmfs/volumes/datastore2/update-from-esxi6.0-6.0_update02.zip

Installation Result
   Message: The update completed successfully, but the system needs to be rebooted for the changes to be effective.
   Reboot Required: true
   VIBs Installed: VMware_bootbank_esx-base_6.0.0-2.34.3620759, VMware_bootbank_vsan_6.0.0-2.34.3563498, VMware_bootbank_vsanhealth_6.0.0-3000000.3.0.2.34.3544323
   VIBs Removed: VMware_bootbank_esx-base_6.0.0-0.5.2615704
   VIBs Skipped:

#esxcli software vib update -d /vmfs/volumes/datastore2/update-from-esxi6.0-6.0_update02.zip

Installation Result
   Message: The update completed successfully, but the system needs to be rebooted for the changes to be effective.
   Reboot Required: true
   VIBs Installed: VMware_bootbank_ehci-ehci-hcd_1.0-3vmw.600.2.34.3620759, VMware_bootbank_esx-tboot_6.0.0-2.34.3620759, VMware_bootbank_lsi-mr3_6.605.08.00-7vmw.600.1.17.3029758, VMware_bootbank_lsi-msgpt3_06.255.12.00-8vmw.600.1.17.3029758, VMware_bootbank_lsu-lsi-lsi-mr3-plugin_1.0.0-2vmw.600.0.11.2809209, VMware_bootbank_lsu-lsi-megaraid-sas-plugin_1.0.0-2vmw.600.0.11.2809209, VMware_bootbank_lsu-lsi-mpt2sas-plugin_1.0.0-4vmw.600.1.17.3029758, VMware_bootbank_misc-drivers_6.0.0-2.34.3620759, VMware_bootbank_net-e1000e_3.2.2.1-1vmw.600.1.26.3380124, VMware_bootbank_net-tg3_3.131d.v60.4-2vmw.600.1.26.3380124, VMware_bootbank_net-vmxnet3_1.1.3.0-3vmw.600.2.34.3620759, VMware_bootbank_nvme_1.0e.0.35-1vmw.600.2.34.3620759, VMware_bootbank_sata-ahci_3.0-22vmw.600.2.34.3620759, VMware_bootbank_scsi-bnx2i_2.78.76.v60.8-1vmw.600.0.11.2809209, VMware_bootbank_xhci-xhci_1.0-3vmw.600.2.34.3620759, VMware_locker_tools-light_6.0.0-2.34.3620759
   VIBs Removed: VMware_bootbank_ehci-ehci-hcd_1.0-3vmw.600.0.0.2494585, VMware_bootbank_esx-tboot_6.0.0-0.0.2494585, VMware_bootbank_lsi-mr3_6.605.08.00-6vmw.600.0.0.2494585, VMware_bootbank_lsi-msgpt3_06.255.12.00-7vmw.600.0.0.2494585, VMware_bootbank_lsu-lsi-lsi-mr3-plugin_1.0.0-1vmw.600.0.0.2494585, VMware_bootbank_lsu-lsi-megaraid-sas-plugin_1.0.0-1vmw.600.0.0.2494585, VMware_bootbank_lsu-lsi-mpt2sas-plugin_1.0.0-1vmw.600.0.0.2494585, VMware_bootbank_misc-drivers_6.0.0-0.0.2494585, VMware_bootbank_net-e1000e_2.5.4-6vmw.600.0.0.2494585, VMware_bootbank_net-tg3_3.131d.v60.4-1vmw.600.0.0.2494585, VMware_bootbank_net-vmxnet3_1.1.3.0-3vmw.600.0.0.2494585, VMware_bootbank_nvme_1.0e.0.35-1vmw.600.0.0.2494585, VMware_bootbank_sata-ahci_3.0-21vmw.600.0.0.2494585, VMware_bootbank_scsi-bnx2i_2.78.76.v60.8-1vmw.600.0.0.2494585, VMware_bootbank_xhci-xhci_1.0-2vmw.600.0.0.2494585, VMware_locker_tools-light_6.0.0-0.0.2494585
   VIBs Skipped: VMWARE_bootbank_mtip32xx-native_3.8.5-1vmw.600.0.0.2494585, VMware_bootbank_ata-pata-amd_0.3.10-3vmw.600.0.0.2494585, VMware_bootbank_ata-pata-atiixp_0.4.6-4vmw.600.0.0.2494585, VMware_bootbank_ata-pata-cmd64x_0.2.5-3vmw.600.0.0.2494585, VMware_bootbank_ata-pata-hpt3x2n_0.3.4-3vmw.600.0.0.2494585, VMware_bootbank_ata-pata-pdc2027x_1.0-3vmw.600.0.0.2494585, VMware_bootbank_ata-pata-serverworks_0.4.3-3vmw.600.0.0.2494585, VMware_bootbank_ata-pata-sil680_0.4.8-3vmw.600.0.0.2494585, VMware_bootbank_ata-pata-via_0.3.3-2vmw.600.0.0.2494585, VMware_bootbank_block-cciss_3.6.14-10vmw.600.0.0.2494585, VMware_bootbank_cpu-microcode_6.0.0-0.0.2494585, VMware_bootbank_elxnet_10.2.309.6v-1vmw.600.0.0.2494585, VMware_bootbank_emulex-esx-elxnetcli_10.2.309.6v-0.0.2494585, VMware_bootbank_esx-base_6.0.0-2.34.3620759, VMware_bootbank_esx-dvfilter-generic-fastpath_6.0.0-0.0.2494585, VMware_bootbank_esx-ui_1.0.0-3617585, VMware_bootbank_esx-xserver_6.0.0-0.0.2494585, VMware_bootbank_ima-qla4xxx_2.02.18-1vmw.600.0.0.2494585, VMware_bootbank_ipmi-ipmi-devintf_39.1-4vmw.600.0.0.2494585, VMware_bootbank_ipmi-ipmi-msghandler_39.1-4vmw.600.0.0.2494585, VMware_bootbank_ipmi-ipmi-si-drv_39.1-4vmw.600.0.0.2494585, VMware_bootbank_lpfc_10.2.309.8-2vmw.600.0.0.2494585, VMware_bootbank_lsu-hp-hpsa-plugin_1.0.0-1vmw.600.0.0.2494585, VMware_bootbank_lsu-lsi-lsi-msgpt3-plugin_1.0.0-1vmw.600.0.0.2494585, VMware_bootbank_lsu-lsi-mptsas-plugin_1.0.0-1vmw.600.0.0.2494585, VMware_bootbank_misc-cnic-register_1.78.75.v60.7-1vmw.600.0.0.2494585, VMware_bootbank_net-bnx2_2.2.4f.v60.10-1vmw.600.0.0.2494585, VMware_bootbank_net-bnx2x_1.78.80.v60.12-1vmw.600.0.0.2494585, VMware_bootbank_net-cnic_1.78.76.v60.13-2vmw.600.0.0.2494585, VMware_bootbank_net-e1000_8.0.3.1-5vmw.600.0.0.2494585, VMware_bootbank_net-enic_2.1.2.38-2vmw.600.0.0.2494585, VMware_bootbank_net-forcedeth_0.61-2vmw.600.0.0.2494585, VMware_bootbank_net-igb_5.0.5.1.1-5vmw.600.0.0.2494585, 
VMware_bootbank_net-ixgbe_3.7.13.7.14iov-20vmw.600.0.0.2494585, VMware_bootbank_net-mlx4-core_1.9.7.0-1vmw.600.0.0.2494585, VMware_bootbank_net-mlx4-en_1.9.7.0-1vmw.600.0.0.2494585, VMware_bootbank_net-nx-nic_5.0.621-5vmw.600.0.0.2494585, VMware_bootbank_nmlx4-core_3.0.0.0-1vmw.600.0.0.2494585, VMware_bootbank_nmlx4-en_3.0.0.0-1vmw.600.0.0.2494585, VMware_bootbank_nmlx4-rdma_3.0.0.0-1vmw.600.0.0.2494585, VMware_bootbank_ohci-usb-ohci_1.0-3vmw.600.0.0.2494585, VMware_bootbank_qlnativefc_2.0.12.0-5vmw.600.0.0.2494585, VMware_bootbank_rste_2.0.2.0088-4vmw.600.0.0.2494585, VMware_bootbank_sata-ata-piix_2.12-10vmw.600.0.0.2494585, VMware_bootbank_sata-sata-nv_3.5-4vmw.600.0.0.2494585, VMware_bootbank_sata-sata-promise_2.12-3vmw.600.0.0.2494585, VMware_bootbank_sata-sata-sil24_1.1-1vmw.600.0.0.2494585, VMware_bootbank_sata-sata-sil_2.3-4vmw.600.0.0.2494585, VMware_bootbank_sata-sata-svw_2.3-3vmw.600.0.0.2494585, VMware_bootbank_scsi-aacraid_1.1.5.1-9vmw.600.0.0.2494585, VMware_bootbank_scsi-adp94xx_1.0.8.12-6vmw.600.0.0.2494585, VMware_bootbank_scsi-aic79xx_3.1-5vmw.600.0.0.2494585, VMware_bootbank_scsi-bnx2fc_1.78.78.v60.8-1vmw.600.0.0.2494585, VMware_bootbank_scsi-fnic_1.5.0.45-3vmw.600.0.0.2494585, VMware_bootbank_scsi-hpsa_6.0.0.44-4vmw.600.0.0.2494585, VMware_bootbank_scsi-ips_7.12.05-4vmw.600.0.0.2494585, VMware_bootbank_scsi-megaraid-mbox_2.20.5.1-6vmw.600.0.0.2494585, VMware_bootbank_scsi-megaraid-sas_6.603.55.00-2vmw.600.0.0.2494585, VMware_bootbank_scsi-megaraid2_2.00.4-9vmw.600.0.0.2494585, VMware_bootbank_scsi-mpt2sas_19.00.00.00-1vmw.600.0.0.2494585, VMware_bootbank_scsi-mptsas_4.23.01.00-9vmw.600.0.0.2494585, VMware_bootbank_scsi-mptspi_4.23.01.00-9vmw.600.0.0.2494585, VMware_bootbank_scsi-qla4xxx_5.01.03.2-7vmw.600.0.0.2494585, VMware_bootbank_uhci-usb-uhci_1.0-3vmw.600.0.0.2494585, VMware_bootbank_vsan_6.0.0-2.34.3563498, VMware_bootbank_vsanhealth_6.0.0-3000000.3.0.2.34.3544323

#reboot

#vmware -v
VMware ESXi 6.0.0 build-3620759

Shutdown CUCM together with ESXi

SOURCE:
-import APC PCNS 4.1 ova into your ESX
NOTE:
-I only use the APC PCNS for Linux ova as a base OS. I didn't use PCNS itself because I don't have an APC UPS with an NMC.
If I had an NMC, I'd use PCNS instead of apcupsd, or keep apcupsd with pcnet settings.
If someone wants to donate an APC NMC, I'd be glad to test it for them

-login as root to pcns and install apcupsd

# yum -y install epel-release
# yum -y install apcupsd
# yum -y install putty

-for apt-get based Linux use these commands
# apt-get install apcupsd
# apt-get install putty-tools

# yum -y install openssh-clients

-test ssh using admin account

# ssh admin@<CUCM IP>
-install pexpect using one of these commands
# yum -y install pexpect.noarch
or
# apt-get install python-pexpect

# cat /root/shutcucm.py

import pexpect
import sys
server_ip = "<CUCM IP>"
server_user = "<platform user>"
server_pass = "<platform pass>"
child = pexpect.spawn('ssh %s@%s' % (server_user, server_ip))
child.logfile = sys.stdout
child.timeout = 60
child.expect('password:')
child.sendline(server_pass)
child.expect('admin:')
child.sendline('utils system shutdown')
child.expect('Enter \(yes/no\)\?')   # expect() takes a regex, so escape ( ) ?
child.sendline('yes')
child.expect('Appliance is being Powered')
print 'Shutdown command successfully sent.'

-connect the usb cable from the UPS to the ESXi host

# cat /etc/apcupsd/apccontrol

#!/bin/sh
prefix=/usr
exec_prefix=/usr
APCPID=/var/run/apcupsd.pid
APCUPSD=/usr/sbin/apcupsd
SHUTDOWN=/sbin/shutdown
SCRIPTSHELL=/bin/sh
SCRIPTDIR=/etc/apcupsd
WALL=wall
if [ -f ${SCRIPTDIR}/${1} -a -x ${SCRIPTDIR}/${1} ]
then
    ${SCRIPTDIR}/${1} ${2} ${3} ${4}
    # exit code 99 means he does not want us to do default action
    if [ $? = 99 ] ; then
        exit 0
    fi
fi
case "$1" in
    killpower)
        echo "Apccontrol doing: ${APCUPSD} --killpower on UPS ${2}" | ${WALL}
        sleep 10
        ${APCUPSD} --killpower
        echo "Apccontrol has done: ${APCUPSD} --killpower on UPS ${2}" | ${WALL}
    ;;
    commfailure)
        echo "Warning communications lost with UPS ${2}" | ${WALL}
    ;;
    commok)
        echo "Communications restored with UPS ${2}" | ${WALL}
    ;;
    powerout)
    ;;
    onbattery)
        echo "Power failure on UPS ${2}. Running on batteries." | ${WALL}
    ;;
    offbattery)
        echo "Power has returned on UPS ${2}..." | ${WALL}
    ;;
    mainsback)
        if [ -f /etc/apcupsd/powerfail ] ; then
           printf "Continuing with shutdown."  | ${WALL}
        fi
    ;;
    failing)
        echo "Battery power exhausted on UPS ${2}. Doing shutdown." | ${WALL}
    ;;
    timeout)
        echo "Battery time limit exceeded on UPS ${2}. Doing shutdown." | ${WALL}
    ;;
    loadlimit)
        echo "Remaining battery charge below limit on UPS ${2}. Doing shutdown." | ${WALL}
    ;;
    runlimit)
        echo "Remaining battery runtime below limit on UPS ${2}. Doing shutdown." | ${WALL}
    ;;
    doreboot)
        echo "UPS ${2} initiating Reboot Sequence" | ${WALL}
        ${SHUTDOWN} -r now "apcupsd UPS ${2} initiated reboot"
    ;;
    doshutdown)
        echo "UPS ${2} initiated Shutdown Sequence" | ${WALL}
        # send the CUCM and ESXi shutdown commands before halting this VM
        python /root/shutcucm.py
        echo "****** Executing ESXi Shutdown Command ******" | ${WALL}
        plink -ssh -2 -pw password root@10.0.100.200 "/sbin/shutdown.sh && /sbin/poweroff"
        ${SHUTDOWN} -h now "apcupsd UPS ${2} initiated shutdown"
    ;;
    annoyme)
        echo "Power problems with UPS ${2}. Please logoff." | ${WALL}
    ;;
    emergency)
        echo "Emergency Shutdown. Possible battery failure on UPS ${2}." | ${WALL}
    ;;
    changeme)
        echo "Emergency! Batteries have failed on UPS ${2}. Change them NOW" | ${WALL}
    ;;
    remotedown)
        echo "Remote Shutdown. Beginning Shutdown Sequence." | ${WALL}
    ;;
    startselftest)
    ;;
    endselftest)
    ;;
    battdetach)
    ;;
    battattach)
    ;;
    *)  echo "Usage: ${0##*/} command"
        echo "       warning: this script is intended to be launched by"
        echo "       apcupsd and should never be launched by users."
        exit 1
    ;;
esac

Fix Orphaned VM

SOURCE: http://orphanedvms.blogspot.com/
http://www.yellow-bricks.com/2011/11/16/esxi-commandline-work/

CLI
-login to ESXi
-list vms and get the VMID
#vim-cmd vmsvc/getallvms
-find the orphaned vm
#vim-cmd vmsvc/power.getstate <Vmid>
If the result is "VMControl error -11: No such virtual machine",
try restarting the mgmt-vmware & vmware-vpxa daemons from the esx console.

From the Direct Console User Interface (DCUI):

Connect to the console of your ESXi host.
Press F2 to customize the system.
Log in as root.
Use the Up/Down arrows to navigate to Restart Management Agents.
Note: In ESXi 4.1 and ESXi 5.0, 5.1, 5.5 and 6.0 this option is available under Troubleshooting Options.
Press Enter.
Press F11 to restart the services.
When the service has been restarted, press Enter.
Press Esc to log out of the system.
From the Local Console or SSH:
Log in to SSH or Local console as root.
Run these commands:
#/etc/init.d/hostd restart
#/etc/init.d/vpxa restart

If you still find the orphan, unregister and re-register the VM from the ESX console with the following commands.

Unregister:
#vim-cmd /vmsvc/unregister <Vmid>

Register:
#vim-cmd /vmsvc/register /path/to/file.vmx

Easiest method
GUI
-Launch Virtual Center or the vSphere Client
-Right click the orphaned virtual machine
-Select 'Remove from Inventory'
-Now go to the summary page of the ESX host and select the correct datastore
-Browse the datastore and locate the .vmx file of the VM
-Right click the .vmx file and choose 'Add to Inventory'
-Go through the wizard and your Virtual Machine should appear online again

Cloning a VM without vCenter in ESXi

-create an empty new vm from GUI

ssh to ESXi
# cd /vmfs/volumes/datastore1
# vmkfstools -i OLDVM/OLDVM.vmdk NEWVM/NEWVM.vmdk -d thin
# cd NEWVM

-make sure NEWVM.vmdk points to the correct flat file
# cat NEWVM.vmdk
RW 83886080 VMFS “NEWVM_1-flat.vmdk”
# ls
NEWVM.vmsd           NEWVM.vmx            NEWVM-flat.vmdk  NEWVM.vmdk

-as you can see, the descriptor references a flat file that doesn't exist
-so just rename the flat file to match the descriptor
# mv NEWVM-flat.vmdk NEWVM_1-flat.vmdk

or
modify NEWVM.vmdk
from
RW 83886080 VMFS “NEWVM_1-flat.vmdk”
to
RW 83886080 VMFS "NEWVM-flat.vmdk"

-to rename vmdk, do not use cp or mv. Use vmkfstools instead
# vmkfstools -E OLDVM.vmdk NEWVM.vmdk
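Editing the descriptor by hand is error-prone; the extent-line rewrite can be sketched as follows (my own helper, assuming a single extent line in the `RW <sectors> VMFS "<file>"` format shown above):

```python
# Rewrite the extent line of a VMDK descriptor to reference a renamed flat file.
import re

def repoint_extent(descriptor_text, new_flat_name):
    # matches lines like: RW 83886080 VMFS "NEWVM_1-flat.vmdk"
    return re.sub(r'(RW \d+ VMFS ")[^"]+(")',
                  lambda m: m.group(1) + new_flat_name + m.group(2),
                  descriptor_text)
```

Read NEWVM.vmdk, pass the text through this, and write it back; only the quoted flat-file name changes.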

Nested ESXi

1. check whether your ESXi supports nested ESXi
open a browser and go to
check whether nestedHVSupported is True or False.
If True, nested 64-bit guests are supported. If False, our ESXi can only run 32-bit guest OSes
PARENT
2. Create TRUNK port for Nested ESXi
Image
3. Install Child ESXi with this configuration
-Choose Custom Configuration at the beginning, type a name for the machine (e.g. vESXi) and select a datastore for it
-Select Virtual Machine Version: 8
-For the Guest Operating System choose Other, in the Version dropdown select Other (64-bit), then enter ESX02
-For the CPUs select a configuration that results in at least 2 virtual cores (this can be either 1 socket and 2 cores per socket or 2 sockets and 1 core per socket)
-Memory: ESXi 5.5 requires a minimum of 4 GB
-Network: ESXi will work fine with just 1 NIC, but there are certain scenarios where you get warnings about missing redundancy. So, I usually use 2 NICs. Depending on the test scenarios that you are targeting you might also use more than 2
-Pick the default SCSI Controller LSI Logic Parallel
-If you want to have a local persistent scratch partition on the same disk then you need to configure a size of at least 5.5 GB. Even bigger sizes will result in a VMFS datastore being automatically created on the remainder of the disk
-After the VM has been created edit its General Options and change the Other (64-bit) to VMware ESXi 5.x in the Guest Operating System version dropdown. This is not possible in the New VM wizard, but now after the VM has been created (because running ESXi in ESXi is officially unsupported)
-remove the Floppy drive from the virtual hardware
-in Advanced Options / Boot Options raise the Power On Boot Delay to 5000 ms (or higher). After powering on the VM and opening its console this will give you some time to press ESC for the boot menu or F2 for the BIOS setup before the installed OS starts booting
4. Upgrade Child ESXi Hardware Version
if your physical host runs ESXi 5.5 then upgrading the VM this way will result in hardware version 10, and you will no longer be able to edit the VM's configuration using the vSphere Client! In this case, to upgrade to version 9 only, we need to open an ESXi shell (see this KB article if you need instructions for doing this) and run the following commands
#vim-cmd vmsvc/getallvms
This will list all VMs that are registered on the host. Find the nested ESXi VM that you just created and note its vmid. Then run
#vim-cmd vmsvc/upgrade <vmid> vmx-09
This will upgrade the VM with that vmid to hardware version 9
PARENT & CHILD ESXi
5. Install latest patches
# esxcli software sources profile list -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml | grep ESXi-5.5 | grep 2015

ESXi-5.5.0-20150104001-no-tools   VMware, Inc.  PartnerSupported
ESXi-5.5.0-20150204001-standard   VMware, Inc.  PartnerSupported
ESXi-5.5.0-20150101001s-no-tools  VMware, Inc.  PartnerSupported
ESXi-5.5.0-20150204001-no-tools   VMware, Inc.  PartnerSupported
ESXi-5.5.0-20150104001-standard   VMware, Inc.  PartnerSupported
ESXi-5.5.0-20150101001s-standard  VMware, Inc.  PartnerSupported

#esxcli software profile install -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml -p ESXi-5.5.0-20150204001-standard
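Picking the newest standard profile out of a listing like the one above can be scripted; a sketch (my own helper) that relies on the YYYYMMDD stamp embedded in the profile name sorting lexically:

```python
# Choose the newest "-standard" image profile from saved
# `esxcli software sources profile list` output; the date in the
# profile name makes a plain lexical sort sufficient.
def newest_standard_profile(listing):
    names = [line.split()[0] for line in listing.splitlines() if line.strip()]
    standard = sorted(n for n in names if n.endswith("-standard"))
    return standard[-1] if standard else None
```

Note it doesn't distinguish security-only ("...s-standard") profiles from full patch rollups; for this listing the newest build is a full one, so the simple sort is enough.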
CHILD ESXi
6. Open its firewall for outgoing http-requests
#esxcli network firewall ruleset set -e true -r httpClient
7. Install VMware Tools for Nested ESXi
#reboot
#esxcli system settings advanced set -o /Net/FollowHardwareMac -i 1
#sed -i "/\/system\/uuid/d" /etc/vmware/esx.conf
#/sbin/auto-backup.sh
8. Export this vm as OVA
Click ESX02 vm
Click menu File/Export OVF Template
Choose Format: Folder of files (OVF)
NOTE:
I got this error when importing it as an OVA