Fix Orphaned VM


-Log in to ESXi
-List VMs and get the VMID:
#vim-cmd vmsvc/getallvms
-Find the orphaned VM:
#vim-cmd vmsvc/power.getstate <Vmid>
If it returns “VMControl error -11: No such virtual machine”, the VM is orphaned.
Try restarting the mgmt-vmware and vmware-vpxa daemons from the ESX console.

From the Direct Console User Interface (DCUI):

Connect to the console of your ESXi host.
Press F2 to customize the system.
Log in as root.
Use the Up/Down arrows to navigate to Restart Management Agents.
Note: In ESXi 4.1 and ESXi 5.0, 5.1, 5.5 and 6.0 this option is available under Troubleshooting Options.
Press Enter.
Press F11 to restart the services.
When the service has been restarted, press Enter.
Press Esc to log out of the system.
From the Local Console or SSH:
Log in to SSH or Local console as root.
Run these commands:
#/etc/init.d/hostd restart
#/etc/init.d/vpxa restart

If the VM is still orphaned, unregister and re-register it from the ESX console with the following commands.

#vim-cmd vmsvc/unregister <Vmid>

#vim-cmd solo/registervm /path/to/file.vmx
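If you script this, the Vmid can be pulled out of the getallvms output with awk. A sketch using made-up sample output (the VM names and ids below are illustrative, not from this host):

```shell
# Sample `vim-cmd vmsvc/getallvms` output (illustrative values)
getallvms='Vmid   Name    File                             Guest OS       Version
12     web01   [datastore1] web01/web01.vmx    centos64Guest  vmx-10
15     db01    [datastore1] db01/db01.vmx      centos64Guest  vmx-10'

# Pick the Vmid whose Name column matches exactly
vmid=$(printf '%s\n' "$getallvms" | awk -v name="db01" '$2 == name {print $1}')
echo "$vmid"
```

On a live host you would pipe `vim-cmd vmsvc/getallvms` straight into the awk instead of using a sample variable.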

Easiest method
-Launch vCenter or the vSphere Client
-Right-click the orphaned virtual machine
-Select ‘Remove from Inventory’
-Go to the Summary page of the ESX host and select the correct datastore
-Browse the datastore and locate the .vmx file of the VM
-Right-click the .vmx file and choose ‘Add to Inventory’
-Go through the wizard and your virtual machine should appear online again

Cloning a VM without vCenter in ESXi

-Create a new empty VM from the GUI

ssh to ESXi
# cd /vmfs/volumes/datastore1
# vmkfstools -i OLDVM/OLDVM.vmdk NEWVM/NEWVM.vmdk -d thin
# cd NEWVM

-make sure NEWVM.vmdk points to the correct flat file
# cat NEWVM.vmdk
RW 83886080 VMFS “NEWVM_1-flat.vmdk”
# ls
NEWVM.vmsd           NEWVM.vmx            NEWVM-flat.vmdk  NEWVM.vmdk

-As you can see, the descriptor references a flat file that doesn’t exist
-Either rename the flat file to match the descriptor:
# mv NEWVM_1-flat.vmdk NEWVM-flat.vmdk

or modify NEWVM.vmdk, changing
RW 83886080 VMFS “NEWVM_1-flat.vmdk”
to
RW 83886080 VMFS “NEWVM-flat.vmdk”
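The descriptor edit can also be done with sed. A sketch on a stand-in one-line descriptor file (only the quoted extent line matters; names as above):

```shell
# Create a stand-in descriptor containing the extent line shown above
printf 'RW 83886080 VMFS "NEWVM_1-flat.vmdk"\n' > NEWVM.vmdk

# Rewrite the extent to point at the flat file that actually exists
sed -i 's/NEWVM_1-flat\.vmdk/NEWVM-flat.vmdk/' NEWVM.vmdk
cat NEWVM.vmdk
```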

-To rename a vmdk, do not use cp or mv. Use vmkfstools instead:
# vmkfstools -E OLDVM.vmdk NEWVM.vmdk

Nested ESXi

1. Check whether your ESXi host supports nested ESXi
open a browser and go to
check whether nestedHVSupported is True or False.
True means nested 64-bit guests are supported. False means nested VMs on this ESXi can run 32-bit guest OSes only
2. Create TRUNK port for Nested ESXi
3. Install Child ESXi with this configuration
-Choose Custom Configuration at the beginning, type a name for the machine (e.g. vESXi) and select a datastore for it
-Select Virtual Machine Version: 8
-For the Guest Operating System choose Other, in the Version dropdown select Other (64-bit), then enter ESX02
-For the CPUs select a configuration that results in at least 2 virtual cores (this can be either 1 socket and 2 cores per socket or 2 sockets and 1 core per socket)
-Memory: ESXi 5.5 requires a minimum of 4 GB
-Network: ESXi will work fine with just 1 NIC, but there are certain scenarios where you get warnings about missing redundancy. So, I usually use 2 NICs. Depending on the test scenarios that you are targeting you might also use more than 2
-Pick the default SCSI Controller LSI Logic Parallel
-If you want to have a local persistent scratch partition on the same disk then you need to configure a size of at least 5.5 GB. Even bigger sizes will result in a VMFS datastore being automatically created on the remainder of the disk
-After the VM has been created, edit its General Options and change Other (64-bit) to VMware ESXi 5.x in the Guest Operating System version dropdown. This is not possible in the New VM wizard, only after the VM has been created (because running ESXi in ESXi is officially unsupported)
-remove the Floppy drive from the virtual hardware
-in Advanced Options / Boot Options raise the Power On Boot Delay to 5000 ms (or higher). After powering on the VM and opening its console this gives you some time to press ESC for the boot menu or F2 for the BIOS setup before the installed OS starts booting
4. Upgrade Child ESXi Hardware Version
If your physical host runs ESXi 5.5, upgrading the VM this way will result in hardware version 10, and you will no longer be able to edit the VM’s configuration using the vSphere Client! To upgrade to version 9 only, open an ESXi shell (see this KB article if you need instructions) and run the following commands:
#vim-cmd vmsvc/getallvms
This lists all VMs registered on the host. Find the nested ESXi VM that you just created and note its vmid. Then run
#vim-cmd vmsvc/upgrade <vmid> vmx-09
This upgrades the VM with that vmid to hardware version 9
5. Install latest patches
# esxcli software sources profile list -d | grep ESXi-5.5 | grep 2015

ESXi-5.5.0-20150104001-no-tools   VMware, Inc.  PartnerSupported
ESXi-5.5.0-20150204001-standard   VMware, Inc.  PartnerSupported
ESXi-5.5.0-20150101001s-no-tools  VMware, Inc.  PartnerSupported
ESXi-5.5.0-20150204001-no-tools   VMware, Inc.  PartnerSupported
ESXi-5.5.0-20150104001-standard   VMware, Inc.  PartnerSupported
ESXi-5.5.0-20150101001s-standard  VMware, Inc.  PartnerSupported

#esxcli software profile install -d -p ESXi-5.5.0-20150204001-standard
6. Open its firewall for outgoing http-requests
#esxcli network firewall ruleset set -e true -r httpClient
7. Install VMware Tools special for nested ESXi
#esxcli system settings advanced set -o /Net/FollowHardwareMac -i 1
#sed -i "/\/system\/uuid/d" /etc/vmware/esx.conf
8. Export this vm as OVA
Click the ESX02 vm
Click menu File/Export OVF Template
Choose Format: Folder of files (OVF)
I got an error when importing it as an OVA

ESXi date and time

To configure NTP on ESX/ESXi 4.1 and ESXi 5.x hosts using the vSphere Client:
Connect to the ESX/ESXi host using the vSphere Client.
Select a host in the inventory.
Click the Configuration tab.
Click Time Configuration.
Click Properties.
Click Options.
Click NTP Settings.
Click Add.
Enter the NTP Server name. For example,
Note: When entering multiple NTP server names, use a comma (,) followed by a space ( ) between the entries.
Click OK.
Click the General tab.
Click Start automatically under Startup Policy.
Note: It is recommended to set the time manually prior to starting the service.
Click Start and click OK.
Click OK to exit.
# cat /etc/ntp.conf
restrict default kod nomodify notrap nopeer
driftfile /etc/ntp.drift
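For reference, a minimal /etc/ntp.conf sketch with server entries added (the pool hostnames are placeholders; substitute your own NTP servers):

```
restrict default kod nomodify notrap nopeer
server 0.pool.ntp.org
server 1.pool.ntp.org
driftfile /etc/ntp.drift
```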
#esxcli network firewall ruleset set --enabled=true --ruleset-id=ntpClient
#esxcli network firewall ruleset set --enabled=true --ruleset-id=updateManager
#esxcli network firewall ruleset set --enabled=true --ruleset-id=httpClient
#esxcli network firewall ruleset set --enabled=true --ruleset-id=iSCSI
#esxcli network firewall ruleset set --enabled=true --ruleset-id=syslog
#chkconfig --add ntpd
#date MMDDhhmmYYYY
#hwclock --systohc

#esxcli network ip dns server add --server=

#esxcli system hostname set --host=esx0
#esxcli system hostname set --
#vim-cmd vimsvc/license --set xxxxx-xxxxx-xxxxx-xxxxx-xxxx

Change ESXi disk type to SSD


My 1TB SSD drive is shown as Non-SSD in a Dell PowerEdge R320 (RAID0 on the PERC controller).

How to trick ESXi into treating it as an SSD drive:
1. check disk list
# esxcli storage nmp device list
Device Display Name: Local DELL Disk (naa.690b11c02d2926001b1dcaf00f26bc58)
Storage Array Type: VMW_SATP_LOCAL
Storage Array Type Device Config: SATP VMW_SATP_LOCAL does not support device configuration.
Path Selection Policy: VMW_PSP_FIXED
Path Selection Policy Device Config: {preferred=vmhba1:C2:T0:L0;current=vmhba1:C2:T0:L0}
Path Selection Policy Device Custom Config:
Working Paths: vmhba1:C2:T0:L0
Is Local SAS Device: false
Is USB: false
Is Boot USB Device: false

# esxcli storage nmp satp rule add -s VMW_SATP_LOCAL -d naa.690b11c02d2926001b1dcaf00f26bc58 -o enable_ssd
Verify SSD has been enabled
# esxcli storage nmp satp rule list | grep enable_ssd
VMW_SATP_LOCAL       naa.690b11c02d2926001b1dcaf00f26bc58     enable_local enable_ssd     user
# reboot
To change back to NON-SSD:
# esxcli storage nmp satp rule remove --satp VMW_SATP_LOCAL -d naa.690b11c02d2926001b1dcaf00f26bc58
# reboot

VMware Terminology


1 ALUA: Asymmetrical logical unit access, a storage array feature.
2 Auto Deploy: Technique to automatically install ESXi to a host.
3 Balloon driver: A memory management technique; reclaims guest VM memory via VMware Tools.
4 Cluster: A collection of hosts in a vSphere data center.
5 Configuration Maximums: Guidelines of how big a VM can be; see the newest for vSphere 5.5.
6 CPU Ready: The percentage of time that the VM is ready to run but cannot be scheduled on a physical CPU (a higher number is bad).
7 DAS: Direct attached storage, disk devices in a host directly.
8 Datacenter: Parent object of the vSphere Cluster.
9 Datastore: A disk resource where VMs can run.
10 DNS: Domain Name Service, a name resolution protocol. Not related to VMware, but it is imperative you set DNS up correctly to virtualize with vSphere.
11 DPM: Distributed Power Management, a way to shut down ESXi hosts when they are not being used and turn them back on when needed.
12 DRS: Distributed Resource Scheduler, automatically moves a busy virtual machine to another ESXi host that has more resources
13 Dynamic grow: A feature to increase the size of VMDK while the VM is running.
14 ESXi: The vSphere Hypervisor from VMware
15 FCoE: Fibre Channel over Ethernet, a networking and storage technology.
16 FT: Fault Tolerance. When an ESX host fails, virtual machines are replayed on another ESX host.
• Requires that HA be enabled
• Impact: Virtual machines enabled for FT are resumed without any interruption of service
17 HA: High Availability. When an ESX host fails, virtual machines are restarted on another ESX host.
• Impact: Virtual machines are restarted, taking 5-10+ minutes to be operational again
• One ESXi host is the Master (Running), another ESXi host is the Slave (Connected)
18 HBA: Host Bus Adapter for Fibre Channel storage networks.
19 Host Profiles: Feature to deploy a pre-determined configuration to an ESXi host.
20 Hot-add: A feature to add a device to a VM while it is running, such as a VMDK.
21 Hypervisor swap: A memory management technique; puts guest VM memory to disk on the host.
22 IOPs: Input/Outputs per second, detailed measurement of a drive’s performance.
23 iSCSI: Ethernet-based shared storage protocol.
24 ISO: Image file; the name is taken from the ISO 9660 file system for optical drives.
25 LUN: Logical unit number, identifies shared storage (Fibre Channel/iSCSI).
26 Maintenance mode: An administration technique where a host evacuates its running and powered-off VMs safely before changes are made.
27 Memory compression: A memory management technique; applies a compressor to active memory blocks on the host.
28 MOB: Managed Object Browser, an interface that exposes every managed object in vCenter/ESXi.
29 NAS: Network attached storage, a shared storage technique for file protocols (NFS).
30 Nested hypervisor: The ability to run ESXi as a VM either on ESXi, VMware Workstation, or VMware Fusion.
31 NFS: Network file system, a file-based storage protocol.
32 NSX: New technology virtualizing the network layer for VMware environments.
33 NUMA: Non-uniform memory access, when multiple processors are involved their memory access is relative to their location.
34 NVRAM: A VM file storing the state of the VM BIOS.
35 Openstack: A cloud operating system that can leverage many hypervisors underneath, including ESXi.
36 OVA: Single-file (tar) packaging of an OVF virtual appliance.
37 OVF: Standards based format for delivering virtual appliances.
38 P2V: Physical to Virtual
39 PowerCLI: vSphere CLI that’s better than vCLI or vMA
40 pRDM: Physical mode raw device mapping, presents a LUN directly to a VM.
41 Quiesce: The act of quieting (pausing running processes) a VM, usually through VMware Tools.
42 Resource pool: A performance management technique, has DRS rules applied to it and contains one or more VMs, vApps, etc.
43 SAN: Storage area network, a shared storage technique for block protocols (Fibre Channel/iSCSI).
44 SAS: Drive type for local disks (also SATA).
45 Shares: Numerical value representing the relative priority of a VM.
46 SSD: Solid state disk, a non-rotational drive that is faster than rotating drives.
47 SSH to ESXi host: The administrative interface you want to use for troubleshooting if you can’t use the vSphere Client or vSphere Web Client.
48 Storage DRS Cluster: A collection of SDRS objects (volumes, VMs, configuration).
49 Storage I/O Control: I/O prioritization for VMs.
50 Storage vMotion: A VM storage migration technique from one datastore to another.
51 Transparent page sharing: A memory management technique; eliminates duplicate blocks in host memory.
52 V2V: Virtual to Virtual
53 VAAI: vStorage APIs for Array Integration, the ability to offload I/O commands to the disk array.
54 VADP: vSphere APIs for Data Protection, a way to leverage the infrastructure for backups.
55 vApp: Groups related virtual machines together, with common configuration, a defined startup order, and resource pools
56 vCenter Configuration Manager: Part of vCloud Suite that automates configuration and compliance for multiple platforms.
57 vCenter Linked Mode: A way of pooling vCenter Servers, typically across geographies.
58 vCenter Orchestrator: An automation technique for vCloud environments.
59 vCenter Server Heartbeat: Keeps vCenter Server available in the event the host running vCenter fails.
60 vCenter Server: Windows server that runs vCenter to manage multiple ESXi servers
61 vCenter Single Sign on: Authentication construct between components of the vCloud Suite.
62 vCenter Site Recovery Manager: An automated solution to prepare for a site failover event for the entire vSphere environment.
63 vCLI: vSphere Command Line Interface, allows tasks to be run against hosts and vCenter Server.
64 vCloud Automation Center: IT service delivery through policy and portals, get familiar with vCAC.
65 vCloud Director: Application to pool vCenter environments and enable self-deployment of VMs.
66 vCloud Networking and Security: Part of the vCloud Suite; provides basic networking and security functionality.
67 vCloud Suite: The collection of technologies to deliver the VMware Software Defined Data Center.
68 vCSA: vCenter Server Appliance, a Linux-based appliance that runs vCenter to manage multiple ESXi servers
69 VDI: Virtual desktop infrastructure, also called DaaS (Desktop as a Service) from Horizon View; run as ESXi VMs and with vSphere.
70 vDS: vNetwork Distributed Switch, an enhanced version of the virtual switch.
71 Virtual Appliance: A pre-packed VM with an application on it.
72 Virtual hardware version: A revision of a VM that aligns to its compatibility. vSphere 5.5 is HW ver 10, vSphere 6.0 is HW ver 11
73 Virtual NUMA: Exposes NUMA topology to VMs, available from virtual hardware version 8.
74 VM Snapshot: A point-in-time representation of a VM.
75 VM: Virtual Machine
76 vMA: vSphere Management Assistant, allows administrators to run scripts or agents that interact with ESX/ESXi and vCenter Server systems without having to explicitly authenticate each time. vMA can also collect ESX/ESXi and vCenter Server logs and store the information for analysis
77 VMDK: The virtual machine disk format, containing the operating system of the VM. VMware’s virtual disk format.
78 VMEM: The page file of the guest VM.
79 VMFS: Virtual Machine File System for ESXi hosts, a clustered file system for running VMs.
80 vmkernel: Officially the “operating system” that runs ESXi and delivers storage and networking for VMs
81 vMotion: Moving a running virtual machine from one ESXi host to another host without loss in connectivity.
• Requires compatible CPUs, or Enhanced vMotion Compatibility (EVC)
82 VMSD: VM file for storing information and metadata about snapshots.
83 VMSN: Snapshot state file of the running VM.
84 VMSS: VM file for storing suspended state.
85 VMTM: VM file containing team data.
86 VM-VM affinity: Sets rules so two VMs should run on the same ESXi host or stay separated.
87 VMware Compatibility Matrix: List of supported storage, servers, and more for VMware technologies.
88 VMware Tools: A set of drivers for VMs to work correctly on synthetic hardware devices.
89 VMware vCenter Mobile Access (vCMA): Virtual appliance required to manage your datacenter from mobile devices such as smartphones and tablets
90 VMX: VM configuration file.
91 VMXF: Supplemental configuration file for when VMs are used in a team.
92 VOVA: A VMware appliance to test OpenStack for vSphere
93 vRDM: Virtual mode raw device mapping, encapsulates a path to a LUN specifically for one VM in a VMDK.
94 vSA: vSphere Storage Appliance is a software-based shared storage solution that enables high availability and automation in vSphere without shared storage hardware
95 vSAN: Virtual SAN, a new VMware announcement for making DAS deliver SAN features in a virtualized manner.
96 vShield Zones: A firewall for vSphere VMs.
97 vSphere Client: Administrative interface of vCenter Server.
98 vSphere DRS: Distributed Resource Scheduler, service that manages performance of VMs.
99 vSphere Fault Tolerance: An availability technique to run the networking, memory and CPU of a VM on two hosts to accommodate one host failure.
100 vSphere folder: An organizational construct, a great way to administer permissions and roles on VMs.
101 vSphere HA: High Availability, will restart a VM on another host if it fails.
102 vSphere Licensing: Different features are available as the licensing level increases, from free ESXi to Enterprise Plus.
103 vSphere role: A permissions construct assigned to users or groups.
104 vSphere SDRS: Storage DRS, manages free space and datastore latency for VMs in pools.
105 vSphere Web Client: Web-based administrative interface of vCenter Server.
106 vSphere: Collection of VMs, ESXi hosts, and vCenter Server.
107 vSwitch: A virtual switch, places VMs on a physical network.
108 VUM: vSphere Update Manager, a way to update hosts and VMs with latest patches, VMware Tools and product updates.
109 VXLAN: Overlay technology that extends a logical Layer 2 network for VMs across different physical networks.

Install ESXi latest patches

ESXi Version 5.5
download the latest patch from
upload the update to the ESXi host

# vmware -lv

# vim-cmd hostsvc/maintenance_mode_enter

Install latest patches
# esxcli software sources profile list -d | grep ESXi-5.5 | grep 2015

ESXi-5.5.0-20150104001-no-tools   VMware, Inc.  PartnerSupported
ESXi-5.5.0-20150204001-standard   VMware, Inc.  PartnerSupported
ESXi-5.5.0-20150101001s-no-tools  VMware, Inc.  PartnerSupported
ESXi-5.5.0-20150204001-no-tools   VMware, Inc.  PartnerSupported
ESXi-5.5.0-20150104001-standard   VMware, Inc.  PartnerSupported
ESXi-5.5.0-20150101001s-standard  VMware, Inc.  PartnerSupported

#esxcli software profile install -d -p ESXi-5.5.0-20150204001-standard
# vim-cmd hostsvc/maintenance_mode_exit && reboot
# vmware -lv

ESXi and VCSA version 6

Download VMware patches from

-download and upload into ESXi /tmp
#cd /tmp
#esxcli software vib install -d /tmp/

To patch VCSA
-mount the VCSA patch ISO into the VCSA vm using the vSphere Client
ssh to VCSA
L: root P:
>software-packages install --iso
When the EULA asks Yes/No, type Yes
If you skip that, just type
>software-packages install --staged
To list the staged packages:
>software-packages list --staged
reboot VCSA
>shutdown reboot -r

To find out current ESXi boot disk

I have 2 boot disks in my ESXi.

1x32GB pen drive
Q. From which disk is it currently booted?
A. From these 2 commands, we found it booted from the SSD
# fdisk -lu
shows 1754529792 sectors; divided by 2 = 877264896 KB = 877 GB = around 1TB
# fdisk -lu
*** The fdisk command is deprecated: fdisk does not handle GPT partitions.  Please use partedUtil
Found valid GPT with protective MBR; using GPT
Disk /dev/disks/naa.690b11c02d2926001b1dcce40a9c7d66: 1754529792 sectors, 1673M
Logical sector size: 512
Disk identifier (GUID): b36e60cd-96dd-41da-a40b-7794cab5c983
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 1754529758
Number  Start (sector)    End (sector)  Size       Code  Name
1              34          262177        256K   0700  Microsoft reserved partition
2          264192      1754527743       1672M   0700  Basic data partition
Found valid GPT with protective MBR; using GPT
Disk /dev/disks/naa.690b11c02d2926001b1dcaf00f26bc58: 1952448512 sectors, 1862M
Logical sector size: 512
Disk identifier (GUID): 4c3060e3-dc8f-4a39-95c4-052556e22d2d
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 1952448478
Number  Start (sector)    End (sector)  Size       Code  Name
1              64            8191        8128   0700
2         7086080        15472639       8190K   0700
3        15472640      1952448478       1847M   0700
5            8224          520191        499K   0700
6          520224         1032191        499K   0700
7         1032224         1257471        219K   0700
8         1257504         1843199        571K   0700
9         1843200         7086079       5120K   0700
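The sector arithmetic above can be sanity-checked in the shell (fdisk reports 512-byte sectors; the GB figure here uses decimal units):

```shell
sectors=1754529792
kb=$((sectors / 2))        # 512-byte sectors -> KB
gb=$((kb / 1000 / 1000))   # decimal gigabytes
echo "${kb} KB = ~${gb} GB"
```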
# ls -lisa /dev/disks/
total 3706976145
4      0 drwxr-xr-x    1 root     root           512 Feb 17 18:05 .
1      0 drwxr-xr-x    1 root     root           512 Feb 17 18:05 ..
148 976224256 -rw-------    1 root     root     999653638144 Feb 17 18:05 naa.690b11c02d2926001b1dcaf00f26bc58
132   4064 -rw-------    1 root     root       4161536 Feb 17 18:05 naa.690b11c02d2926001b1dcaf00f26bc58:1
134 4193280 -rw-------    1 root     root     4293918720 Feb 17 18:05 naa.690b11c02d2926001b1dcaf00f26bc58:2
136 968487919 -rw-------    1 root     root     991731629568 Feb 17 18:05 naa.690b11c02d2926001b1dcaf00f26bc58:3
138 255984 -rw-------    1 root     root     262127616 Feb 17 18:05 naa.690b11c02d2926001b1dcaf00f26bc58:5
140 255984 -rw-------    1 root     root     262127616 Feb 17 18:05 naa.690b11c02d2926001b1dcaf00f26bc58:6
142 112624 -rw-------    1 root     root     115326976 Feb 17 18:05 naa.690b11c02d2926001b1dcaf00f26bc58:7
144 292848 -rw-------    1 root     root     299876352 Feb 17 18:05 naa.690b11c02d2926001b1dcaf00f26bc58:8
146 2621440 -rw-------    1 root     root     2684354560 Feb 17 18:05 naa.690b11c02d2926001b1dcaf00f26bc58:9
155 877264896 -rw-------    1 root     root     898319253504 Feb 17 18:05 naa.690b11c02d2926001b1dcce40a9c7d66
151 131072 -rw-------    1 root     root     134217728 Feb 17 18:05 naa.690b11c02d2926001b1dcce40a9c7d66:1
153 877131776 -rw-------    1 root     root     898182938624 Feb 17 18:05 naa.690b11c02d2926001b1dcce40a9c7d66:2
149      0 lrwxrwxrwx    1 root     root            36 Feb 17 18:05 vml.0200000000690b11c02d2926001b1dcaf00f26bc58504552432048 -> naa.690b11c02d2926001b1dcaf00f26bc58
133      0 lrwxrwxrwx    1 root     root            38 Feb 17 18:05 vml.0200000000690b11c02d2926001b1dcaf00f26bc58504552432048:1 -> naa.690b11c02d2926001b1dcaf00f26bc58:1
135      0 lrwxrwxrwx    1 root     root            38 Feb 17 18:05 vml.0200000000690b11c02d2926001b1dcaf00f26bc58504552432048:2 -> naa.690b11c02d2926001b1dcaf00f26bc58:2
137      0 lrwxrwxrwx    1 root     root            38 Feb 17 18:05 vml.0200000000690b11c02d2926001b1dcaf00f26bc58504552432048:3 -> naa.690b11c02d2926001b1dcaf00f26bc58:3
139      0 lrwxrwxrwx    1 root     root            38 Feb 17 18:05 vml.0200000000690b11c02d2926001b1dcaf00f26bc58504552432048:5 -> naa.690b11c02d2926001b1dcaf00f26bc58:5
141      0 lrwxrwxrwx    1 root     root            38 Feb 17 18:05 vml.0200000000690b11c02d2926001b1dcaf00f26bc58504552432048:6 -> naa.690b11c02d2926001b1dcaf00f26bc58:6
143      0 lrwxrwxrwx    1 root     root            38 Feb 17 18:05 vml.0200000000690b11c02d2926001b1dcaf00f26bc58504552432048:7 -> naa.690b11c02d2926001b1dcaf00f26bc58:7
145      0 lrwxrwxrwx    1 root     root            38 Feb 17 18:05 vml.0200000000690b11c02d2926001b1dcaf00f26bc58504552432048:8 -> naa.690b11c02d2926001b1dcaf00f26bc58:8
147      0 lrwxrwxrwx    1 root     root            38 Feb 17 18:05 vml.0200000000690b11c02d2926001b1dcaf00f26bc58504552432048:9 -> naa.690b11c02d2926001b1dcaf00f26bc58:9
156      0 lrwxrwxrwx    1 root     root            36 Feb 17 18:05 vml.0200000000690b11c02d2926001b1dcce40a9c7d66504552432048 -> naa.690b11c02d2926001b1dcce40a9c7d66
152      0 lrwxrwxrwx    1 root     root            38 Feb 17 18:05 vml.0200000000690b11c02d2926001b1dcce40a9c7d66504552432048:1 -> naa.690b11c02d2926001b1dcce40a9c7d66:1
154      0 lrwxrwxrwx    1 root     root            38 Feb 17 18:05 vml.0200000000690b11c02d2926001b1dcce40a9c7d66504552432048:2 -> naa.690b11c02d2926001b1dcce40a9c7d66:2

Backup/Restore ESXi config


It’s important to rerun the backup every time you install patches; otherwise you can’t restore the configuration

-Download and install vSphere SDK for Perl from
-click Start and search for PowerShell
-right click Windows PowerShell/Run as Administrator
PS C:\WINDOWS\system32> Set-ExecutionPolicy remotesigned
Execution Policy Change
The execution policy helps protect you from scripts that you do not trust. Changing the execution policy might expose you to the security risks described in the about_Execution_Policies help topic at Do you want to change the execution policy?
[Y] Yes  [N] No  [S] Suspend  [?] Help (default is “Y”):
Download PowerCLI from
-right click VMware-PowerCLI-5.8.0-2057893.exe/Run as Administrator and install
PS>cd  C:\Program Files (x86)\VMware\Infrastructure\vSphere PowerCLI\Scripts


Download and install vMA from
Method1 PowerCLI
PS> Connect-VIServer -user root -password secret
PS> Get-VMHost | Get-VMHostFirmware -BackupConfiguration -DestinationPath c:\download
Host            Data
----            ----      c:\download\configBundle-

Method2 ESXi CLI

To synchronize the configuration changed with persistent storage, run the command:
#vim-cmd hostsvc/firmware/sync_config
To backup the configuration data for an ESXi host, run the command:
#vim-cmd hostsvc/firmware/backup_config

Note: The command should output a URL in which a web browser may be used to download the file. The backup file is located in the /scratch/downloads directory as configBundle-<HostFQDN>.tgz

Method3 vSphere CLI

open menu Start/Programs/VMware/VMware vSphere CLI/Command Prompt

> --server= --username=root --password=secret -s c:\Download\ESXi_backup.txt

Method4 vMA

#vicfg-cfgbackup --server --username=root --password=secret -s /root/ESXi010115.txt


Method1 PowerCLI
PS> Connect-VIServer -user root -password secret
-power off all vms
PS> Set-VMHost -VMHost -State Maintenance

Name                 ConnectionState PowerState NumCpu CpuUsageMhz CpuTotalMhz   MemoryUsageGB   MemoryTotalGB Version
----                 --------------- ---------- ------ ----------- -----------   -------------   ------------- -------
                     Maintenance     PoweredOn       8         416       18392           6.330          95.938   5.5.0

PS> Set-VMHostFirmware -VMHost -Restore -SourcePath c:\download\configBundle- -HostUser root -HostPassword secret

Method2 ESXi CLI

Put the host into maintenance mode by running the command:
>vim-cmd hostsvc/maintenance_mode_enter
Copy the backup configuration file to a location accessible by the host, then run the command below. In this case, the configuration file was copied to the host’s /tmp directory. For more information, see Using SCP to copy files to or from an ESX host (1918).
>vim-cmd hostsvc/firmware/restore_config /tmp/configBundle.tgz

Note: Executing this command will initiate an automatic reboot of the host after command completion.

Method3 vSphere CLI

C:\Program Files (x86)\VMware\VMware vSphere CLI\bin>perl.exe --server --username root --password secret -o enter

Host entered into maintenance mode successfully.

C:\Program Files (x86)\VMware\VMware vSphere CLI\bin>perl.exe --server --username root --password secret -f -l c:\Download\ESXi_backup.txt

C:\Program Files (x86)\VMware\VMware vSphere CLI\bin>perl.exe --server --username root --password secret -o exit

Method4 vMA

NOTE: the vMA VM must not be running on the ESXi host that is going into maintenance mode
#vicfg-hostops --server --username=root --password=secret -o enter
#vicfg-cfgbackup --server --username=root --password=secret -f -l /root/ESXi010115.txt
#vicfg-hostops --server --username=root --password=secret -o exit

Reset password

Aruba Controller:
Please log in on the console using a serial cable (i.e. you must be in front of the controller):
Login: password
Password: forgetme!

Then go into enable mode with the password “enable”
#configure terminal
(config) #mgmt-user admin root
<hit enter and set up the new root password>
#write memory
Once done, log out and log back in with the new password.
-If you want to decrypt the wireless security key you have set up for your wireless network, execute #encrypt disable and then #show run; in the config you will see the wireless key in clear text under your VAP profile section.
Sometimes you have the admin password of the controller but not the enable mode password. What to do?
Access the controller via the GUI and change the enable mode password in the Controller Wizard.
Navigate to Configuration > Controller Wizard > Under Wizards > Configure Controller > Basic Info. Enter any name of your choice and the password for user Admin (retyped). The “Password for Enable mode Access” field is where you reset the enable mode password; retype it and click Next

1. Connect Console cable
2. Reboot the router and press the Break key to interrupt the boot sequence.

For break key sequences

Software Platform Operating System Try This
Hyperterminal IBM Compatible Windows XP Ctrl-Break
Hyperterminal IBM Compatible Windows 2000 Ctrl-Break
Hyperterminal IBM Compatible Windows 98 Ctrl-Break
Hyperterminal (version 595160) IBM Compatible Windows 95 Ctrl-F6-Break
Kermit Sun Workstation UNIX Ctrl-\l
MicroPhone Pro IBM Compatible Windows Ctrl-Break
Minicom IBM Compatible Linux Ctrl-a f
ProComm Plus IBM Compatible DOS or Windows Alt-b
SecureCRT IBM Compatible Windows Ctrl-Break
Telix IBM Compatible DOS Ctrl-End
Telnet N/A N/A Ctrl-], then type send brk
Telnet to Cisco IBM Compatible N/A Ctrl-]
Teraterm IBM Compatible Windows Alt-b
Terminal IBM Compatible Windows Break
Tip Sun Workstation UNIX Ctrl-], then Break or Ctrl-c
VT 100 Emulation Data General N/A F16
Windows NT IBM Compatible Windows Break-F5, or Shift-6 Shift-4 Shift-b (^$B)
Z-TERMINAL Mac Apple Command-b
N/A Break-Out Box N/A Connect pin 2 (X-mit) to +V for half a second
Cisco to aux port N/A Control-Shft-6, then b
IBM Compatible N/A Ctrl-Break

3. Set the config register to boot ignoring startup-config, then reset
rommon 1 > confreg 0x2142
You must reset or power cycle for new config to take effect
rommon 2 > reset

4. Change the password
Type no after each setup question, or press Ctrl-C to skip the initial setup procedure
Router> enable
Router# copy startup-config running-config
Destination filename [running-config]? (hit enter)
Building configuration…
Router# configure terminal
Router(config)# enable password cisco
Router(config)# enable secret cisco
Router(config)# line console 0
Router(config-line)# password cisco
Router(config)# username cisco privilege 15 secret cisco
Router(config)# config-register 0x2102
Router(config)# exit
Router# copy running-config startup-config
Destination filename [startup-config]? (hit enter)
Building configuration…
Router# reload

Netscaler MPX / VPX

From time to time you may run into this: a customer has a NetScaler setup and has forgotten the password for the device. What do you do?
If you have an MPX, you need to connect to the device with a serial cable and use a terminal emulator such as PuTTY on the serial port. If you have a VPX, you just need to open the console. When the device boots, press Ctrl+C. On the VPX it is simple: the boot menu appears.


Then you just press 4 to go into single-user mode. On the MPX we likewise have to press Ctrl+C when the following appears in the console:
Press [Ctrl-C] for command prompt, or any other key to boot immediately.
Booting [kernel] in 2 seconds…
To start the MPX in single-user mode, type either boot -s (at the loader prompt) or reboot -- -s (from a running system) to restart in single-user mode. When you are in single-user mode the console will look like this.


Next we have to mount the flash device, since this is where the config file resides. The flash device has different names on different appliances:
For the VPX this device is called /dev/ad0s1a; the commands below use /dev/da0s1a, so substitute your device name as needed.
First check disk consistency, then mount the device:
fsck /dev/da0s1a (checks disk consistency)
mount /dev/da0s1a /flash (mounts the drive under the folder /flash)
df -l (lists the devices and where they are mounted)


Next we need to change directory to the flash drive where the config file is located.
cd /flash/nsconfig


Next we use a grep command to create a new config file without the line that contains the password string:
grep -v "set system user nsroot" ns.conf > new.conf
Then rename the current config and put the new file in its place:
mv ns.conf old.ns.conf
mv new.conf ns.conf

After this is done we have a new config file without the password for nsroot and we can reboot.
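Note that once the "set system user nsroot" line is gone, nsroot falls back to its factory default password (nsroot) after the reboot, so log in and set a new one immediately. A sketch of the NetScaler CLI commands (the password shown is a placeholder):

```
set system user nsroot NewPassword123
save ns config
```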

XenServer:

At the boot screen you will see "SYSLINUX 4.02 … Boot:"; type menu.c32 and press Enter.
Next a blue menu appears. Move to "xe-serial" and press Tab.
You can now see the command line starting with "mboot.c32…". Change the part "xencons=hvc console=hvc0" to "console=ttyS0,115200n8 single".
Press Enter and the server continues the boot process.
At the resulting command line, run passwd to change your root password.

Cyberoam:

1. Connect the console cable and launch PuTTY.
2. Power on the Cyberoam and continuously press Enter until you see CRLoader.
You are taken to the CRLoader screen. Go to Option 0 – CRLoader and press Enter.
Select Option 2 – Troubleshoot.
Select Option 1 – Reset console password.
This resets the admin user password. Press "Ok" to continue.
Select Option 5 – Reboot. This reboots the appliance.
Once the Cyberoam has rebooted, enter the default password "admin" and CLI access will be available.

F5 BIG-IP:

1. When the boot loader appears, press e.
2. Edit the kernel line as needed, press Enter, then press b to boot.

3. Changing the password
After booting, a # prompt will appear:
# mount -a
# passwd root
New BIG-IP password:
Retype new BIG-IP password:
Fortinet (FortiGate):

Shortly after a reboot, log in on the console with the built-in maintainer account; the password is "bcpb" followed by the unit's serial number. For example, for this serial number:
SN: FGT-603907516189
L: maintainer
P: bcpbFGT-603907516189
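Once logged in as maintainer, you can reset the admin account from the FortiOS CLI; a hedged sketch (the password shown is a placeholder):

```
config system admin
    edit admin
        set password NewPassword123
    next
end
```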

HP ProCurve:

1. Press and hold the Clear button (the recessed hole) for 10 seconds.
Once you release the Clear button, only the password protection will be removed. All other configuration settings will remain intact, and the switch will not reboot.
If you would like to disable the Clear password button on the front of the HP ProCurve switch, enter the following:
Switch# configure terminal
Switch(config)# no front-panel-security password-clear
You will also notice the reset button next to the clear button. To disable this button enter the following.
Switch(config)# no front-panel-security factory-reset
Both buttons are now disabled. If you would like to enable these buttons again, do so with the commands below.
Switch(config)# front-panel-security password-clear
Switch(config)# front-panel-security factory-reset

Finally, if you are unsure of the status of the Reset and Clear buttons on the ProCurve switch, enter the following:
Switch(config)# show front-panel-security

Juniper:

1. Connect your Console cable with settings 9600/8/N/1
2. Power on the device and watch the screen for the line:
Hit [Enter] to boot immediately, or space bar for command prompt.
When you see that line, hit the SPACE BAR and you will receive an OK prompt.
3. At the OK prompt, boot the system into single-user mode by issuing the command
boot -s

4. The system will boot in single-user mode and you will be prompted to enter the path name for a shell or "recovery" for root password recovery. Since we are trying to recover the password, enter recovery.
5. The system will then boot, run a recovery script, and place you at the > prompt:
> edit
# set system root-authentication plain-text-password
# commit
# exit
> exit
Reboot the system? [y/n] y

Ruckus:

Standalone AP
Press the hardware reset hole on the back of the AP for more than 12 seconds, then log in with the defaults:
L: super
P: sp-admin

If you have a saved ZoneDirector backup or debug log, contact Ruckus Tech Support, who may be able to decipher the admin password from your files. Ruckus Technical Support will need to validate you are the legal administrator of the device before doing this.


VMWare ESXi:

-Download a Live CD: Kali Linux or Ubuntu Desktop
1. Insert the CD or ISO
2. Boot ESXi from either CD above
On Dell servers this is done by pressing F2 during boot.
If your ESXi is under VMWare Workstation, then click VM > Power > Power On to BIOS
3. If you are using Ubuntu, click Try Ubuntu instead of Install Ubuntu.
If you are using Kali Linux, you will see the desktop right away.
4. open Terminal
#mount /dev/sda5 /mnt
#cp /mnt/state.tgz /tmp
#cd /tmp
#tar xzf state.tgz
#tar xzf local.tgz
#vi etc/shadow
(remove root's password hash, i.e. everything between the first and second colons on the root line)
#tar czf local.tgz etc
#tar czf state.tgz local.tgz
#cp state.tgz /mnt
Remove the CD and reboot the host.
Now you can login using vSphere client as root without password
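The vi step above can also be done non-interactively. A minimal sketch, assuming the archive was extracted so that etc/shadow sits in the current directory; the sed expression blanks the hash field between the first and second colons of the root line (demonstrated here on a sample file with a made-up hash):

```shell
# Sample shadow line with a hypothetical hash (field 2 is the password hash)
printf 'root:$1$abcdefgh$0123456789abcdefgh:13358:0:99999:7:::\n' > shadow.sample

# Blank the hash field so root logs in with an empty password
sed -i 's/^root:[^:]*:/root::/' shadow.sample

cat shadow.sample   # -> root::13358:0:99999:7:::
```

On the real file the same expression is run against etc/shadow before repacking the archives.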