Windows 7 : Windows 2003 Print Server

Came across an issue today where a Windows 7 client would not print to a Windows Server 2003 x64 print server. The solution was a registry value that disables the use of asynchronous RPC for printing:

    HKEY_LOCAL_MACHINE\Software\Policies\Microsoft\Windows NT\Printers\
    Right-click Printers, point to New, and then click DWORD Value.
    Type EnabledProtocols and press ENTER.
    Right-click EnabledProtocols, and then click Modify.
    In the Value data box, type 6, and then click OK.
    Close Registry Editor.

Reboot the client, then test printing again.
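If you need to roll this out to multiple clients, the same change can be captured in a .reg file and imported with Registry Editor (a sketch based on the steps above):

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\Software\Policies\Microsoft\Windows NT\Printers]
"EnabledProtocols"=dword:00000006
```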

More info here: http://support.microsoft.com/kb/2269469

Userenv : Event 1041

On a Windows 2003 server I came across the following errors in the event log:

Windows cannot query DllName registry entry for {CF7639F3-ABA2-41DB-97F2-81E2C5DBFC5D} and it will not be loaded. This is most likely caused by a faulty registration.

Windows cannot query DllName registry entry for {7B849a69-220F-451E-B3FE-2CB811AF94AE} and it will not be loaded. This is most likely caused by a faulty registration.

To resolve this issue, delete the registry key for each GUID referenced in the errors:

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon\GPExtensions\{CF7639F3-ABA2-41DB-97F2-81E2C5DBFC5D}]

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon\GPExtensions\{7B849a69-220F-451E-B3FE-2CB811AF94AE}]

This issue is caused by the uninstaller not removing all keys created during the installation of IE8.
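A sketch of the same fix as a .reg file: a leading hyphen before a key name tells Registry Editor to delete that key on import.

```
Windows Registry Editor Version 5.00

[-HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon\GPExtensions\{CF7639F3-ABA2-41DB-97F2-81E2C5DBFC5D}]

[-HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon\GPExtensions\{7B849a69-220F-451E-B3FE-2CB811AF94AE}]
```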

HP Virtual Connect : Mapped or Tunnelled VLANs

This is by no means conclusive; it is based upon experience and testing with a set of c-Class chassis using Flex-10 Ethernet Virtual Connect modules.

Tunnelling

Advantages

There is no limit on VLAN numbers, and no need to define individual VLANs on Virtual Connect.

Disadvantages

When tunnelling VLANs you lose the ability to be selective about the VLANs passed to an interface. This is likely to increase the number of uplinks required to a blade chassis; tunnelling is ALL or NOTHING.

Anything that uses the trunk will have ALL VLAN traffic passed through to the server NIC, therefore VLANs must be configured on the server's NIC within the OS.
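For example, on an ESXi host behind a tunnelled uplink the 802.1Q tagging has to be handled per port group; a PowerCLI sketch (the switch name and VLAN ID are illustrative):

```powershell
# With a tunnelled uplink ALL VLANs arrive tagged at the NIC, so the
# hypervisor must tag/untag itself; on ESXi that means a VLAN ID per port group.
Get-VMHost "vmhost fqdn" | Get-VirtualSwitch -Name "vSwitch0" |
    New-VirtualPortGroup -Name "VLAN100-Prod" -VLanId 100
```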

Mapped

Advantages

You can take a single defined VLAN and pass it untagged to a server NIC.

You can also pass some or all VLANs to the server; you can be as specific as you like. Mapped is very flexible.

Disadvantages

There is a limit on the number of VLANs that can be defined: 320.

Conclusion

From reading through the HP docs I get the impression that mapped is the preferred option. I've configured this on six chassis with a variety of different servers, from ESXi hosts to e-commerce web servers and clusters; mapped VLANs give me the flexibility I need to limit the number of uplinks whilst providing as many connections as possible from the VC modules to the blades.

Regardless of Configuration

Switches should be configured with trunked interfaces using 802.1Q trunking and 802.3ad (LACP) link aggregation; this allows grouped uplink sets that are active/active.

To verify that Virtual Connect has formed an LACP LAG, navigate to Interconnect Bays and select the I/O bay where the uplink ports are linked. Locate the LAG ID column and confirm that the assigned uplink ports share the same LAG ID.
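As a sketch, assuming a Cisco IOS upstream switch (interface numbers and the VLAN list are illustrative, not from this environment), the matching uplink configuration pairs 802.1Q trunking with an LACP channel-group:

```
! Member ports towards the VC uplinks: 802.3ad/LACP active mode
interface range GigabitEthernet1/0/1 - 2
 switchport mode trunk
 channel-group 10 mode active
!
! The aggregate link carries the 802.1Q trunk
interface Port-channel10
 switchport mode trunk
 switchport trunk allowed vlan 10,20,30
```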

Storage Essentials : Beware ‘Get All Details’ Task

Shortly after the introduction of Storage Essentials into the environment, fibre tape backups were interrupted at around 03:00 every day. We had various errors reported by hosts:

[Major] From: [email protected] “SYS-LTO5_LIB_Drive_1”  Time: 23/02/2011 03:02:56
[90:51]      Tape8:0:6:0C Cannot write to device (Details unknown.)

[Major] From: [email protected] “CDC-LTO5_LIB_Drive_4”  Time: 23/02/2011 03:03:26
[90:161]     Cannot write filemark. ([5] I/O error)

[Critical] From: [email protected] “/itaedi”  Time: 23/02/2011 03:03:26
    Unexpected close reading NET message => aborting.

[Major] From: [email protected] “UX-ITPROD_OFFLINE”  Time: 23/02/2011 03:04:52
[61:3003]      Lost connection to VBDA named “/itaedi”
    on host server.domain.local.
    Ipc subsystem reports: “IPC Read Error
    System error: [10053] Software caused connection abort

[Minor] From: [email protected] “evamgmt02.domain.local [/CONFIGURATION]”  Time: 22/02/2011 22:44:13
[81:141]      \IISDatabase
    Cannot export configuration object: (Unknown internal error.) => backup incomplete.

It turned out that there was a scheduled task within Storage Essentials, 'Get All Details', that ran at 03:00 every day. This task scanned all known hosts to check for new fibre-attached LUNs/devices.

After disabling this task, backups were no longer interrupted. It is still possible to manually trigger updates on a single host; this does NOT impact backups that are running. Backups only appear to be interrupted whilst the scan runs against ALL hosts.

vSphere : Semi-automate Bulk Datastore Creation

Use the following command to output new LUNs that are on the EVA8400 (these will not be set to RoundRobin):
Get-VMHost $vmhost | Get-ScsiLun -CanonicalName "naa.6001438005*" | Where {$_.MultipathPolicy -ne "RoundRobin"} | ft -AutoSize

Copy the output to a text file and then open it with Excel as below, adding the CapacityGB, Name and LunID columns. Use the vCenter GUI to obtain LUN IDs that can then be translated into the correct name as per your site's LUN/datastore naming convention.

    CanonicalName,ConsoleDeviceName,LunType,LunID,Name,CapacityMB,CapacityGB,MultipathPolicy
    naa.6001438005deea3c0000900003030000,/vmfs/devices/disks/naa.6001438005deea3c0000900003030000,disk,9,EUVMCL01_8400_DS09,491520,480,MostRecentlyUsed
    naa.6001438005deea3c00009000030f0000,/vmfs/devices/disks/naa.6001438005deea3c00009000030f0000,disk,13,EUVMCL01_8400_TEST_DS04,716800,700,MostRecentlyUsed

Save the new CSV file to C:\new_datastores.csv

Now create the script to perform the task; copy the text below into C:\storage_setup.ps1. Modify the values for the CSV path, cluster name and host FQDN to suit your environment:

$CSVFile = "C:\new_datastores.csv"
$vmcluster = "vm cluster name"
$vmhost = Get-VMHost "vmhost fqdn"

Write "Importing CSV: $($CSVFile)"
$CSV = Import-CSV $CSVFile

Foreach ($Item in $CSV){
    $LunID   = $Item.LunID
    $LunPath = $Item.CanonicalName
    $Name    = $Item.Name

    Write "Creating: $($Name)... Path: $($LunPath)"
    $vmhost | New-Datastore -Vmfs -Name "$($Name)" -Path "$($LunPath)" -BlockSizeMB 8

    Write "Created, applying RoundRobin multipathing policy for cluster: $($vmcluster)"
    # Use a separate loop variable so we do not overwrite $vmhost above
    Foreach ($node in Get-Cluster $vmcluster | Get-VMHost) {
        Write "node: $($node.Name)"
        Get-VMHost $node | Get-ScsiLun -CanonicalName "$($LunPath)" | Where {$_.MultipathPolicy -ne "RoundRobin"} | Set-ScsiLun -MultipathPolicy "RoundRobin"
    }
}

Open vSphere PowerCLI and connect to your vCenter: Connect-VIServer vCentername

Now execute the new script saved above.

Finally, if you are using Enterprise Plus, use the vCenter GUI to set Storage I/O Control; this is a drop-down box that can be set across all hosts with one change per datastore.
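If you prefer to script this step too, later PowerCLI releases (5.0 and above) expose the setting on Set-Datastore; a sketch, assuming the same CSV used above (the setting requires Enterprise Plus licensing on the hosts):

```powershell
# Enable Storage I/O Control on each newly created datastore
Foreach ($Item in Import-CSV "C:\new_datastores.csv") {
    Get-Datastore $Item.Name | Set-Datastore -StorageIOControlEnabled $true -Confirm:$false
}
```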

C7000 Blade Chassis : Virtual Connect Ethernet ‘Unknown’

It would appear that my share of HP firmware issues was not satisfied with the recent BL465c G7 issues….

Symptoms

We have six chassis, two C7000 G2 and four C7000 G1, spread across two datacenters. On logging on to Virtual Connect for one of the G2 chassis I discovered that all Ethernet links, shared uplinks and VLANs were showing as 'Unknown'. All the server profiles were degraded, and the Interconnect Modules were showing a Communication Status 'Failed' warning.

The chassis contains 2x Flex-10 Virtual Connect modules and 2x 8Gb/20-port FC Virtual Connect modules. The chassis was already running the latest firmware for both FC and Ethernet: http://h20000.www2.hp.com/bizsupport/TechSupport/SoftwareDescription.jsp?lang=en&cc=uk&prodTypeId=3709945&prodSeriesId=3794423&swItem=MTX-737a042afe074ee0a0c928e1de&prodNameId=3794431&swEnvOID=1113&swLang=8&taskId=135&mode=3

This started with a single chassis in the morning and affected all of them by the evening. All of the chassis are running the firmware version above, although the G1 chassis have HP 1/10Gb VC-Enet and 4Gb FC modules.

Resolution

******** Read this section in full before following the HP suggested fix! *********

HP support advised this was a known issue as per http://h20000.www2.hp.com/bizsupport/TechSupport/Document.jsp?objectID=c02720395

The suggested fix is to remove the DNS server IP address from the Enclosure Bay IP Addressings configuration for the Interconnect Ethernet Modules.

I implemented the suggested fix on a chassis with no production systems; after 10 minutes there had been no impact, as you can see from the OA logs:

  Mar  1 16:46:06  OA: admin logged into the Onboard Administrator from 10.144.4.4
  Mar  1 16:51:55  OA: EBIPA Interconnect first DNS IP for bay 1 set to  by user admin
  Mar  1 16:51:55  OA: EBIPA Interconnect second DNS IP for bay 1 set to  by user admin
  Mar  1 16:51:55  OA: EBIPA Interconnect first DNS IP for bay 2 set to  by user admin
  Mar  1 16:51:55  OA: EBIPA Interconnect second DNS IP for bay 2 set to  by user admin

This was then implemented on a further chassis, the remaining G2 chassis with the Flex-10/8Gb FC modules:

   Mar  1 17:03:09  OA: EBIPA Interconnect first DNS IP for bay 1 set to  by user admin
   Mar  1 17:03:09  OA: EBIPA Interconnect second DNS IP for bay 1 set to  by user admin
   Mar  1 17:03:09  OA: EBIPA Interconnect first DNS IP for bay 2 set to  by user admin
   Mar  1 17:03:09  OA: EBIPA Interconnect second DNS IP for bay 2 set to  by user admin
   Mar  1 17:04:39  OA: Internal health status of interconnect in bay 2 changed to Unknown
   Mar  1 17:05:08  OA: Internal health status of interconnect in bay 2 changed to OK
   Mar  1 17:05:14  OA: Internal health status of interconnect in bay 1 changed to Unknown
   Mar  1 17:05:50  OA: Internal health status of interconnect in bay 1 changed to OK
   Mar  1 17:06:17  OA: Internal health status of interconnect in bay 1 changed to Unknown
   Mar  1 17:06:47  OA: Internal health status of interconnect in bay 1 changed to OK

As you can see, the VC modules became unresponsive and reset, causing the chassis to lose all connectivity to the network.

On the chassis that did not reset, I encountered a further issue today whilst checking a Shared Uplink Set: the VC modules reset!

   Mar  3 09:39:21  OA: admin logged into the Onboard Administrator from 10.144.4.4
   Mar  3 09:44:00  OA: Internal health status of interconnect in bay 1 changed to Unknown
   Mar  3 09:44:40  OA: Internal health status of interconnect in bay 1 changed to OK
   Mar  3 09:45:00  OA: Internal health status of interconnect in bay 1 changed to Unknown
   Mar  3 09:45:27  OA: Internal health status of interconnect in bay 1 changed to OK
   Mar  3 09:45:34  OA: Internal health status of interconnect in bay 2 changed to Unknown
   Mar  3 09:46:13  OA: Internal health status of interconnect in bay 2 changed to OK
   Mar  3 09:46:37  OA: Internal health status of interconnect in bay 2 changed to Unknown
   Mar  3 09:47:05  OA: Internal health status of interconnect in bay 2 changed to OK

No changes were made and these modules reset….

We're arranging an EBIPA/VC reset of the four remaining chassis this weekend, out of hours!

You’ve been warned!

vSphere : P2V Windows 2000 and vApp Issues

In order to perform a Windows 2000 P2V you cannot use the most recent version of VMware Converter. The last version to support Windows 2000 is 4.0.1, build 161434.

A big caveat with this version is that it does not support vApps as defined in vCenter. If you try to import to vCenter, the Converter will crash and the following event will be logged in the event log:

The VMware vCenter Converter Server service terminated unexpectedly.  It has done this 1 time(s).

On loading the converter application again you will receive an error:

VMware vCenter Converter Server is installed but not running. When Converter Server is not running you will not be able to connect to local server.
Do you want to start it now? 

The workaround for this issue is to connect directly to an ESX host instead of vCenter.

The physical machine must be able to connect to the ESX host on ports 443 and 903 in order for the conversion to work.
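A quick way to check those ports from the physical machine before starting the conversion (a sketch using Test-NetConnection, which requires Windows 8/2012 or later; the host name is illustrative):

```powershell
# Verify the Converter-required ports are reachable on the ESX host
Foreach ($port in 443, 903) {
    Test-NetConnection -ComputerName "esxhost.domain.local" -Port $port |
        Select-Object ComputerName, RemotePort, TcpTestSucceeded
}
```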