Let’s start by stating that I’m not a SQL DBA 😉 When it comes to databases, I’m just a user who needs a database for my applications 🙂 Lately, in various environments, we’ve been creating different Ivanti Workspace Control and Automation Manager databases. We simply request an empty database from the SQL department and are assigned DBO rights. We then connect to the database from, for example, Ivanti Automation Manager and handle the initial database setup within the application itself.

After initializing the SQL database, the user who performed the initialization can start, for example, Ivanti Workspace Control without encountering any issues. However, another user with the same permissions (DBO) on the database is unable to launch the application. That second user can access the SQL database using SQL Management Studio, where they appear to have full control over the database.

While comparing a ‘working’ database with the ‘non-working’ one, we noticed a difference in the SQL table names. In the working database, all tables were named dbo.table1, dbo.table2, and so forth. In the non-working database, all table names began with the username of the individual who initially initialized the database, for example [Mydomain\Username].table1.

After some investigation, we found out there were several reasons why this might happen. Here’s an overview of the main causes:

1. Default Schema of the User

In SQL Server, each database user can be assigned a default schema. If a user does not explicitly specify a schema when creating a table, the table will be created in their default schema. If the default schema is not set to dbo (which is the default schema for most users), the tables may end up in another schema that includes the domain and username.

Solution: Set the user’s default schema to dbo:

ALTER USER [domain\username] WITH DEFAULT_SCHEMA = dbo;
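
You can check a user’s current default schema with a quick query against sys.database_principals (the account name below is a placeholder):

SELECT name, default_schema_name
FROM sys.database_principals
WHERE name = 'domain\username';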

2. Explicit Schema Specification in SQL Statements

If the SQL statements being executed to create tables include the schema explicitly as domain\username, the tables will be created under that schema.

Solution: Ensure that the SQL statements use dbo or omit the schema to use the default schema:

CREATE TABLE dbo.TableName (Column1 INT, Column2 NVARCHAR(50));

3. Incorrect Connection Context

Sometimes, the connection context or the tool used to run the SQL scripts might be causing the tables to be created with the domain\username schema. This can happen if the tool implicitly uses the connected user’s name as the schema.

Solution:

  • Check the connection settings and the context in which the scripts are being executed.
  • Make sure that the connection is not enforcing a different schema context; a quick sanity check is shown below.
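
From within the session in question, you can ask SQL Server which login, database user, and default schema it is actually using:

SELECT SUSER_SNAME() AS LoginName, USER_NAME() AS DatabaseUser, SCHEMA_NAME() AS DefaultSchema;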

4. Lack of Permissions

If the user creating the tables does not have the necessary permissions on the dbo schema, SQL Server might default to using a schema that the user does have permissions on, which could be their own domain\username schema.

Solution: Grant the necessary permissions to the user on the dbo schema:

GRANT ALTER ON SCHEMA::dbo TO [domain\username];
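
Keep in mind that creating tables also requires the CREATE TABLE permission in the database itself, so depending on your setup you may need to grant that as well:

GRANT CREATE TABLE TO [domain\username];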

In our case, the SQL tables sat in the initial user’s schema rather than dbo, so we had to move them back to the dbo schema. We ran a SQL query, replacing ‘DOMAIN\USERNAME’ with the schema name shown in the table prefixes, which automatically generated a script to transfer all tables back to the dbo schema.

SELECT 'ALTER SCHEMA dbo TRANSFER [' + s.name + '].' + o.name
FROM sys.objects o
INNER JOIN sys.schemas s ON o.schema_id = s.schema_id
WHERE s.name = 'DOMAIN\USERNAME'
AND o.type IN ('U', 'P', 'V')

The query doesn’t make any changes immediately; it only generates a SQL script, which can then be used to actually change the tables. The sample output of the query-generated SQL script appears as follows:

ALTER SCHEMA dbo TRANSFER [DOMAIN\USERNAME].tblAudits
ALTER SCHEMA dbo TRANSFER [DOMAIN\USERNAME].tblSettings
ALTER SCHEMA dbo TRANSFER [DOMAIN\USERNAME].tblObjects
ALTER SCHEMA dbo TRANSFER [DOMAIN\USERNAME].tblResources
ALTER SCHEMA dbo TRANSFER [DOMAIN\USERNAME].tblFiles
ALTER SCHEMA dbo TRANSFER [DOMAIN\USERNAME].tblFolders

Before proceeding with the table changes, ensure you have a valid SQL backup of the database. This step is essential to safeguard against any potential data loss or unforeseen issues.

This script contains a series of ALTER SCHEMA statements, each transferring a table from the [DOMAIN\USERNAME] schema to dbo. Copy the script output generated by the query and execute it against the broken database; this moves all tables from [DOMAIN\USERNAME] back to dbo.
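
To verify that nothing was left behind, you can afterwards run a query like the following (again replacing the schema name); it should return zero rows:

SELECT s.name AS SchemaName, o.name AS ObjectName, o.type
FROM sys.objects o
INNER JOIN sys.schemas s ON o.schema_id = s.schema_id
WHERE s.name = 'DOMAIN\USERNAME';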

Now that we have ‘fixed’ the database tables, let’s ensure this doesn’t happen again by modifying the current SQL configuration to prevent new tables from being created incorrectly.

1. Set Default Schema for Users

Ensure that all new and existing database users have their default schema set to dbo. This can be done when creating the user or later if the user already exists.

Setting Default Schema When Creating a User:

CREATE USER [domain\username] FOR LOGIN [domain\username] WITH DEFAULT_SCHEMA = dbo;

Altering Default Schema for Existing Users:

ALTER USER [domain\username] WITH DEFAULT_SCHEMA = dbo;

2. Create Database Roles with Specific Permissions

Instead of assigning permissions directly to individual users, create roles with the necessary permissions; the AUTHORIZATION [dbo] clause below makes dbo the owner of the role. Combine this with setting each member’s default schema to dbo as described above.

Creating a Role with Default Schema:

CREATE ROLE [RoleName] AUTHORIZATION [dbo];
ALTER ROLE [RoleName] ADD MEMBER [domain\username];
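
The role itself still needs the actual permissions on the dbo schema; what exactly you grant depends on your application, but it could look something like this:

GRANT SELECT, INSERT, UPDATE, DELETE, EXECUTE ON SCHEMA::dbo TO [RoleName];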

Hopefully, this article prevents you from encountering the same issue we experienced 🙂

Apple OSX users sometimes experience an incorrect keyboard layout loaded within their Citrix session. As a result, special characters are often located in different places. The cause of this issue is that Apple has a different keyboard layout compared to Windows, leading to an Apple US-international keyboard being recognized as a Dutch keyboard in Windows.

How to identify your Apple keyboard layout by country or region

Some time ago, together with Citrix Support, we conducted extensive research into the cause of this issue and whether there are ways to change this behavior. Unfortunately, it turns out this behavior cannot be changed through a central solution; it can only be altered by making adjustments on each individual OSX system. This guide provides detailed instructions on what needs to be adjusted.

Read More →

For quite some time, I’ve been using Synology PhotoStation to manage all my family photos. Since I’m not the only user—my children also use the app—I thought it might be a good idea to set some permissions on the different folders. In general, I use three different groups: full control, read-write, and read-only. This way, my kids can access all shared family photos but can’t accidentally delete them. Setting access permissions within Synology Photos is somewhat limited, so I had to reorganize my folder structure to fulfill my needs. However, in the end, it worked out very well.

Last week, I decided to move over some additional photos and reorganize parts of my original folder structure. Thinking it would be quicker than using the web GUI, I accessed my photo share through SMB. Immediately, I noticed something wasn’t right. Folders could not be renamed, data couldn’t be moved, and so on. When I looked at the Windows ACL permissions, I noticed they were different from the permissions set in Synology Photos. In my case, most permissions were inherited from the root photos folder.

Read More →

We manage a Citrix farm where users primarily launch a full desktop environment. From there, they can also connect to other applications running in Citrix silos or access external Citrix farms. As our user environment management (UEM) solution, we use Ivanti Workspace Control (IWC).

When a user logs onto the primary desktop, the endpoint hostname is utilized by Ivanti Workspace Control within that session. Based on the endpoint hostname, we can set specific configurations using features like “location and devices”. In a double-hop scenario, where a user launches a Citrix published application or another Citrix desktop from within the primary session, the hostname of the primary session server is used as the hostname in the secondary session.

Read More →

Due to lifecycle management (LCM), we replaced several Citrix NetScaler appliances with new ones. Although we conducted thorough acceptance tests before putting them into production, unfortunately, we experienced an annoying issue once they were operational.

Some users complained that they saw a spinning progress bar after they successfully logged on to the Citrix NetScaler. It was only reported by a minority of users and was resolved by refreshing their web browser sessions. In the end, users stopped reporting the issue because it occurred infrequently and the solution was simple—just press F5. We initiated an investigation in the hope of completely resolving the issue.

Read More →

As a big fan of UniFi products, I manage multiple UniFi sites from a self-hosted UniFi Network Application across various locations. Sometime after migrating from my UniFi Security Gateway USG-3 to a UniFi UXG Lite and upgrading from UniFi Network Application 7.x.xx to 8.x.xx, I suddenly noticed that I could no longer modify existing firewall rules. When trying to modify a rule, I received an error message along the lines of “Unable to save rule xxx due to out of range index number 2xxx.” I’m not sure whether it was caused by switching to the UXG Lite or by the software upgrade, but things changed somewhere in this process.

The only option I had was to quickly create a new rule and delete the old one. Not very convenient, to say the least. Initially, I didn’t spend any more time on it since I wanted to move on and thought I would delve into it later.

Today, however, I needed to create a Firewall Rule that would be one of the first to be applied. I created a new rule in the usual way, which appeared at the bottom of the list after being created. When I tried to drag it up in the GUI, an error message appeared saying “Firewall rule reorder failed. Please ensure all firewall options have been entered correctly.”

When I took a closer look at my Firewall Rules, two things immediately stood out to me:

  • All my existing rules had an ID in the 2xxx series, while the newly created firewall rule was assigned an ID of 2xxxx.
  • All my existing rules suddenly had an extra row with a lock icon in UniFi Network Application 8.1.113.

This likely explains why I was receiving an “out of range index number” error when trying to modify existing old firewall rules. It strongly appears that UniFi has switched to a different series of index numbers for their firewall rules. Although old rules have been carried over, you can no longer modify them or give newly created firewall rules a higher priority than the old ones.

In my case, and undoubtedly for many others with extensive firewall rule sets, this is quite frustrating. To make modifications, the rules need to be converted to new index numbers, meaning you have to create new firewall rules and then delete the old ones from the 2xxx series. Initially, I started manually replicating all the rules, but I quickly grew tired of that. There had to be a more efficient way to do this.

Although there appears to be an API for the UniFi Network Application, the official documentation leaves much to be desired; I could find very little substantial information on it.

After some searching, I came across some API documentation on the Ubiquiti Community Wiki, which was useful. By using the URL api/s/{site}, it’s possible to interact with User-defined firewall rules via rest/firewallrule. Using the GET method, you can retrieve existing rules, and theoretically, you should be able to create some rules using the POST method. After some figuring out, I was eventually able to read all existing rules via the API.

The next step was to change the rule_index number from a 2xxx series to a 2xxxx number and then write it back using the API. To my surprise, I was able to modify the cloned firewall rule, which now had a 2xxxx ID, and successfully import the modified rule into the UniFi Network application. I ended up writing a PowerShell script that allowed me to read and modify all existing rules at once. During testing, I encountered some issues, but I was able to automate the resolution using the PowerShell script as well.

By using my script, you can convert all your old, existing Firewall Rules into new Firewall Rules in one go. I’ll share the script with you in the text below. Just make sure to create a good backup of your UniFi configuration before you run the script!!! Don’t forget to adjust the “Configuration Variables” section to suit your environment.

Back up your UniFi Network Application configuration before running the script! Better safe than sorry 😉

<#
.SYNOPSIS
This script clones firewall rules within specified index ranges and rulesets to new indices starting from a specified index.

.DESCRIPTION
The script logs into a UniFi Controller, retrieves firewall rules based on user-defined criteria, and clones them to new indices. It is configurable for different sites, ranges, and ruleset types.

.PARAMETERS
- $UnifiControllerSiteID: Site ID for which the rules are managed.
- $UnifiControllerMigrateRuleStartIndex: Start index of the rule range to clone.
- $UnifiControllerMigrateRuleEndIndex: End index of the rule range to clone.
- $UnifiControllerNewRulesStartIndex: Starting index for new cloned rules.
- $UnifiControllerRuleSet: Type of ruleset to filter (e.g., LAN_IN, LAN_OUT).

.EXAMPLE
# To execute the script, simply configure the parameters at the top of the script and run it in a PowerShell environment.

.NOTES
Ensure to test the script in a controlled environment before deploying in production.
#>

# Configuration Variables
$UnifiControllerURL = "https://xxx.xxx.xxx.xxx:8443"    #UniFi Controller IP / Hostname
$UnifiControllerUsername = "admin@unifi.local"          #UniFi Username
$UnifiControllerPassword = "password"                   #UniFi Password
$UnifiControllerSiteID = "default"                      #UniFi SiteID   
$UnifiControllerMigrateRuleStartIndex = 2000            #Start index of the rule range to clone
$UnifiControllerMigrateRuleEndIndex = 2999              #End index of the rule range to clone.
$UnifiControllerNewRulesStartIndex = 20000              #Starting index for new cloned rules.
$UnifiControllerRuleSet = 'LAN_IN'                      #Type of ruleset to filter (e.g., LAN_IN, LAN_OUT).

# Ignore SSL errors if your controller uses a self-signed certificate
[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12
[System.Net.ServicePointManager]::ServerCertificateValidationCallback = { $true }

# Start a session and save the cookie (-SessionVariable below creates the $session object automatically)
$loginUri = "$UnifiControllerURL/api/login"
$body = @{ username = $UnifiControllerUsername; password = $UnifiControllerPassword } | ConvertTo-Json

$response = Invoke-RestMethod -Uri $loginUri -Method Post -Body $body -SessionVariable session -ContentType "application/json"

# Construct the API endpoint URL and retrieve the firewall rules for the specified site ID using an authenticated GET request.
$firewallRulesUri = "$UnifiControllerURL/api/s/$UnifiControllerSiteID/rest/firewallrule"
$firewallRules = Invoke-RestMethod -Uri $firewallRulesUri -WebSession $session -Method Get

# Select firewall rules with rule_index between $UnifiControllerMigrateRuleStartIndex and $UnifiControllerMigrateRuleEndIndex and matching the ruleset
$filteredRules = $firewallRules.data | Where-Object {
    $_.rule_index -ge $UnifiControllerMigrateRuleStartIndex -and $_.rule_index -le $UnifiControllerMigrateRuleEndIndex -and $_.ruleset -eq $UnifiControllerRuleSet
} | Sort-Object rule_index  # Ensure the rules are sorted by their indices

# Initialize the new index starting from $UnifiControllerNewRulesStartIndex
$newIndex = $UnifiControllerNewRulesStartIndex

foreach ($rule in $filteredRules) {
    # Clone the rule
    $newRule = $rule.PSObject.Copy()

    # Assign the new index and increment for the next rule
    $newRule.rule_index = $newIndex++
    
    # It's typical to unset the ID before submitting a new entry
    $newRule._id = $null

    # Convert the modified rule to a JSON payload
    $jsonBody = $newRule | ConvertTo-Json -Depth 5

    # Define the endpoint URI for creating the new rule
    $createRuleUri = "$UnifiControllerURL/api/s/$UnifiControllerSiteID/rest/firewallrule"

    # Send the POST request to create the new rule
    try {
        $newRuleResponse = Invoke-RestMethod -Uri $createRuleUri -WebSession $session -Method Post -Body $jsonBody -ContentType "application/json"
        Write-Host "New rule created successfully with rule_index" -NoNewline -ForegroundColor Green
        Write-Host " $newRule.rule_index" -ForegroundColor Cyan
    }
    catch {
        Write-Host "Failed to create new rule: $_" -ForegroundColor Red
    }
}

After running the script, all firewall rules will appear duplicated within the GUI, and as a final step, you will need to remove the old firewall rules from the 2xxx index series.

At the bottom of the firewall rules, select “Manage,” where you can then select the old firewall rules with a 2xxx number. Finally, click on “Remove” to delete them.

A “security.txt” file is a standard proposed by security researcher Ed Foudil in 2017 as a way for websites to define a security policy. It’s akin to the well-known “robots.txt” file which specifies rules for web crawlers. The security.txt file allows website owners to provide information to security researchers about how to report security vulnerabilities or concerns.

Since April 2022, security.txt has been an Internet Engineering Task Force (IETF) informational standard (RFC 9116).
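
A minimal security.txt, served from /.well-known/security.txt, could look like the example below. RFC 9116 requires at least the Contact and Expires fields; the values shown here are placeholders:

Contact: mailto:security@example.com
Expires: 2026-12-31T23:00:00.000Z
Preferred-Languages: en, nl
Policy: https://example.com/security-policy
Canonical: https://example.com/.well-known/security.txt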

Read More →

Recently, we worked on upgrading a Citrix NetScaler VPX from version 13.0 to the latest 14.1 build. The Citrix NetScaler VPX, which had been running for quite some time, had not been upgraded because it still used features and functionalities, including Classic Policies, which essentially needed to be replaced by Advanced Policies starting from the 13.1 build.

During the preparation for the upgrade, our main focus was on the legacy configuration in the running ns.conf file that needed to be adjusted.

Citrix ADC scripts for migrating and converting Citrix ADC configuration with deprecated features https://github.com/netscaler/ADC-scripts/tree/master

By using the NSPEPI tool, you can not only check for legacy configuration but also convert it to new configurations in many cases. Always ensure that you download and use the latest version during the analysis. If you are upgrading from a version older than build 13.1, always use NSPEPI beforehand to ensure that everything continues to work as expected after the upgrade.

check_invalid_config /nsconfig/ns.conf
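
To have NSPEPI actually convert the classic configuration instead of only flagging it, you can point it at (a copy of) your ns.conf; the converted configuration is written to a new file alongside the original. Check the README of the version you downloaded for the exact options:

nspepi -f /nsconfig/ns.conf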

After replacing all legacy configuration in the ns.conf and confirming with the NSPEPI tool that there were no blocking issues for an upgrade to the latest 14.1 build, we conducted a trial upgrade within our acceptance environment.

After the upgrade, the Citrix NetScaler restarted smoothly, but it was no longer possible to log in using our domain accounts (LDAPS). Fortunately, logging in with the local nsroot account still worked. Once logged in, it was immediately apparent that several load-balanced VIPs were down, causing the LDAPS load balancer to be inactive. Additionally, various NetScaler features were suddenly no longer visible.

Show Unlicensed Features

The navigation suddenly included an item labeled “Show Unlicensed Features,” which we hadn’t seen before. After clicking on it, all features became visible again. However, it was immediately apparent that many features suddenly appeared to be unlicensed, including features we had been using prior to the upgrade to build 14.1. While browsing through the NetScaler GUI, we navigated to System > License and discovered that we were running an Express edition instead of Platinum. Consequently, many of the commonly used features were indeed unlicensed.

ADC License

Next, we examined the existing license files located in the directory /nsconfig/license. What immediately caught our attention was the date present in the license file. In our case, the expiration date was older than the Eligibility Dates required for using the Citrix NetScaler 14.1 build, which is July 12, 2023 🙁

NetScaler License File
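
If you want to check the dates in your own license files, you can list the INCREMENT lines from the NetScaler shell; the SA date is embedded in those lines (this assumes your .lic files are in the default location):

shell
grep -i increment /nsconfig/license/*.lic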

Citrix products and their Eligibility dates https://support.citrix.com/article/CTX111618/citrix-product-customer-success-services-eligibility-dates

Since this was a Citrix NetScaler VPX with a valid software subscription, the solution was fortunately quite simple. Simply redownload your license file via the MyCitrix license portal and upload it to the Citrix NetScaler VPX. The new license file will include a new SA Date, enabling you to run build 14.1. After restarting the Citrix NetScaler, all previously licensed features reappeared.

Check your product eligibility dates before you proceed with the upgrade!

In the past, it was possible to upload your NetScaler configuration file (ns.conf) to the Citrix Insight Service, which would then conduct an automated health check of the configuration. You would receive a report detailing any potential issues, best practices not followed, and so on. This was incredibly helpful during setup. Unfortunately, Citrix discontinued this self-diagnostic service some time ago.

During E2EVC 2022 Athens, I stumbled upon “Arrow’s NetScaler config analyser” in one of the sessions, a more than handy tool. After registration, it allows you to check your NetScaler configuration for free. In practice, however, I still regularly encounter NetScaler administrators who are unaware of its existence, so I thought I’d mention it again.

Arrow’s NetScaler config analyser https://app.xconfig.io

Although they offer more than just the free health check, in this case, I want to specifically mention the FREE “Online Config Analysis.”

Unless you choose to save your ns.conf within your personal account, your ns.conf is not uploaded to their website; instead, it is analyzed locally within your browser session.

For added security, it’s advisable to first mask any confidential data such as passwords, IP addresses, etc., ensuring they’re not usable.
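
A quick way to do this is a small PowerShell snippet that masks all IPv4 addresses before you load the file; extend the patterns for passwords or other secrets specific to your configuration (the file names are just examples):

(Get-Content .\ns.conf) -replace '\b(\d{1,3}\.){3}\d{1,3}\b', 'x.x.x.x' | Set-Content .\ns_masked.conf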

Without registration, not all results are visible, so go ahead and register yourself.

After creating an account, you’ll have full visibility into all the issues discovered within your ns.conf. These issues are divided into four categories:

  • Critical
  • Major
  • Medium
  • Low

If you ask me, your configuration shouldn’t contain any Critical, Major, or Medium findings! 😊

An example of a Critical finding might be:

An example of a Medium finding might be:

What’s also very handy, besides the analysis of your NetScaler configuration, is how easily you can browse through your configuration. By selecting an item on the left side (structured identically to the NetScaler GUI), you’ll see the corresponding lines from your configuration on the right. This makes the configuration much more readable and understandable.

The tool is constantly evolving, with new recommendations being added regularly. For a comprehensive overview of the change log, you can navigate to the “What’s new” section. If you encounter false positives or have recommendations for improvements, don’t hesitate to let them know. In my experience, they are responsive to user feedback and often address issues or implement suggestions in subsequent releases!

For managing several environments, we utilize Ivanti Automation Manager, leveraging Microsoft SQL Server as the database. According to the documentation, Ivanti Automation Manager does not support “SQL Server Always On availability groups,” and unfortunately, there is no mention of using a “SQL Server multi-subnet failover cluster.”

Supported database systems https://help.ivanti.com/res/help/en_US/IA/2024/Admin/Content/48735.htm

Within our environments, however, the use of a “SQL Server multi-subnet failover cluster” is the standard database configuration that we must use. Simply adding the parameter “MultiSubnetFailover=True” to the database connection string makes the SQL client aware that it is talking to a multi-subnet failover cluster. However, since the database connection string is built by Ivanti Automation Manager, we can’t add “MultiSubnetFailover=True” to it directly; this parameter would need to be included from within the Ivanti Automation Manager software.

SqlConnection.ConnectionString Property https://learn.microsoft.com/en-us/dotnet/api/system.data.sqlclient.sqlconnection.connectionstring
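
For reference, this is roughly what such a connection string looks like, and a few lines of PowerShell are enough to test it yourself (server and database names are placeholders):

$connectionString = "Server=MySqlServerName;Database=IvantiAutomation;Integrated Security=True;MultiSubnetFailover=True"
$connection = New-Object System.Data.SqlClient.SqlConnection($connectionString)
$connection.Open()    # With MultiSubnetFailover, the client tries all nodes in parallel and connects to the active one
$connection.Close()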

Upon inquiry, Ivanti indeed confirmed that there is no support for a “SQL Server multi-subnet failover cluster” and asked us to submit a Uservoice for this feature through the Ivanti Ideas Portal, which we duly did. However, for unclear reasons, Ivanti has chosen not to implement this feature.

Uservoice: MultiSubnetFailover support (Microsoft OLE DB Driver for SQL Server) https://ivanti.ideas.aha.io/ideas/IA-I-44

MultiSubnetFailover Uservoice

Without the “MultiSubnetFailover=True” value in the connection string, Ivanti Automation Manager may, for example, fail to start after the active SQL node changes.

Connection error

Since we couldn’t avoid using a SQL Server multi-subnet failover cluster, we have temporarily resolved this by implementing a script. It may not be the most elegant solution, but it gets the job done!

We have created a scheduled task on all servers where the Ivanti Automation Manager Console and Ivanti Dispatchers are installed. This task runs every 5 minutes and executes a PowerShell script, which checks if the connection to the database is still possible. If not, it identifies the active SQL node and updates the hosts file accordingly, allowing the Consoles and Dispatchers to establish a connection with the database again.
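
Creating that scheduled task can itself be scripted; a sketch using the built-in ScheduledTasks module could look like this (the task name and script path are our own choices):

$action  = New-ScheduledTaskAction -Execute 'powershell.exe' -Argument '-NoProfile -ExecutionPolicy Bypass -File "C:\Scripts\Update-HostsFile.ps1"'
$trigger = New-ScheduledTaskTrigger -Once -At (Get-Date) -RepetitionInterval (New-TimeSpan -Minutes 5)
Register-ScheduledTask -TaskName 'Update-HostsFile' -Action $action -Trigger $trigger -User 'SYSTEM' -RunLevel Highest   # SYSTEM is needed to write the hosts file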

<#
.SYNOPSIS
This PowerShell script updates the hosts file on a target machine with the current active SQL node IP address.
It checks if the specified target hostname is reachable. If not, it determines the active SQL node and updates the hosts file accordingly.

.DESCRIPTION
This script is designed to be run on a target machine to ensure that it always resolves a specific hostname to the active SQL node IP address.
It checks the availability of the target hostname and updates the hosts file with the IP address of the active SQL node if necessary.

.NOTES
- Script Name: Update-HostsFile.ps1
- Version: 1.0
- Authors: Rink Spies
- Date: 08-04-2024

.PARAMETER None
This script does not accept any parameters.

.EXAMPLE
.\Update-HostsFile.ps1
This command runs the script to update the hosts file with the current active SQL node IP address.

#>

# VARIABLES
$HostsFile = "$env:SystemRoot\System32\drivers\etc\hosts"
$TargetHostname = "MySqlServerName" # <<Update with SQL Server Instance name >>
$SQLNodes = @("1.2.3.4", "2.3.4.5", "3.4.5.6")  # << update with all SQL Node IP's >>
$LogFile = "C:\Windows\Temp\Update-hosts-file.log"

# FUNCTIONS

# Add-HostRecord function replaces any existing hosts file record for the hostname and then adds the new one.
function Add-HostRecord {
    param(
        [string]$HostsFilePath,
        [string]$IP,
        [string]$Hostname
    )

    # Remove stale entries for the hostname first; otherwise an old (dead) IP could keep resolving
    $content = Get-Content -Path $HostsFilePath | Where-Object { $_ -notmatch "\s$([regex]::Escape($Hostname))\s*$" }
    Set-Content -Path $HostsFilePath -Value $content

    Add-Content -Path $HostsFilePath -Value "$IP`t`t$Hostname"
}

# Test-ActiveSQLNode function checks if a given SQL node is active.
function Test-ActiveSQLNode {
    param(
        [string]$SQLNode
    )

    return (Test-NetConnection -ComputerName $SQLNode -Port 1433 -InformationLevel Quiet -ErrorAction SilentlyContinue)
}

# Update-HostsFile function updates the hosts file with the IP address of the active SQL node.
function Update-HostsFile {
    foreach ($Node in $SQLNodes) {
        if (Test-ActiveSQLNode $Node) {
            Add-HostRecord -HostsFilePath $HostsFile -IP $Node -Hostname $TargetHostname
            return $Node
        }
    }
    return $null
}

# Log-Output function logs messages to the console and a log file.
function Log-Output {
    param(
        [string]$Message,
        [bool]$IncludeTimestamp = $true
    )

    $logEntry = if ($IncludeTimestamp) {
        "$(Get-Date -Format 'dd-MM-yyyy HH:mm:ss') $Message"
    } else {
        $Message
    }

    Write-Output $logEntry
    Add-Content -Path $LogFile -Value $logEntry
}

# SCRIPT

# Start the script
Log-Output "#############################################"
Log-Output "Starting update hosts file script."

# Check if the current IP for the target hostname is active
if (-not (Test-ActiveSQLNode $TargetHostname)) {
    Log-Output "Current IP for $TargetHostname is not active anymore."
    $activeNode = Update-HostsFile
    if ($activeNode) {
        Log-Output "Active IP $activeNode is online and configured in the hosts file."
    } else {
        Log-Output "None of the IPs are active."
    }
} else {
    Log-Output "Current IP for $TargetHostname is still active."
}

# End the script
Log-Output "Stopping update hosts file script."

As mentioned, this is not really the solution you’d ideally want to use, but hopefully Ivanti Automation Manager will still gain support for MultiSubnetFailover in the future.