
Case studies: support

Citrix VDA Crashes Due to Write Cache Exhaustion
Issue:
After 18+ months of stable operations, one or two Citrix XenApp (Windows Server 2019) VDAs began crashing randomly, dropping user sessions and losing unsaved data. The affected VDAs could not be recovered.
Root Cause:
Since initial deployment, additional applications had been installed in the gold image and memory usage per VDA had risen sharply (up to 85-90%), with user sessions averaging nearly 4 GB each. Citrix WEM memory optimisations had little effect, so memory per VDA was increased to 48 GB.
The 40 GB PVS write cache drive, sized for the original workload, was now undersized. It filled rapidly due to the larger pagefile, redirected logs, the Citrix WEM cache, and growth of the PVS write cache to disk, causing VDA instability.
Resolution:
Increasing the size of each VDA's local write cache disk resolved the issue.
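
As a preventative follow-up, headroom on the write cache drive can be checked automatically rather than discovered at crash time. The sketch below is a minimal example only: the drive letter (D:) and the 80% alert threshold are illustrative assumptions, not details from the incident, and should be adjusted to the actual gold image.

```python
import shutil

# Assumed location of the PVS write cache drive; adjust per gold image.
WRITE_CACHE_DRIVE = "D:\\"
# Alert once the drive passes this usage fraction (illustrative threshold).
ALERT_THRESHOLD = 0.80

def write_cache_near_full(drive: str = WRITE_CACHE_DRIVE) -> bool:
    """Return True if the write cache drive usage exceeds the alert threshold."""
    usage = shutil.disk_usage(drive)
    used_fraction = usage.used / usage.total
    print(f"{drive} used: {used_fraction:.0%} "
          f"({usage.free / 2**30:.1f} GiB free of {usage.total / 2**30:.1f} GiB)")
    return used_fraction >= ALERT_THRESHOLD

if __name__ == "__main__":
    if write_cache_near_full():
        # Hook this into existing monitoring (event log, email, webhook, etc.).
        print("WARNING: PVS write cache drive nearing exhaustion.")
```

Run on a schedule, a check like this would have flagged the cache exhaustion well before sessions started dropping.
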
AV Quarantine Disrupts Citrix Access
Issue:
At 06:30, users across the EU were unable to log into the Citrix environment via Okta or directly via NetScaler.
Root Cause:
Accessing the environment via VPN, we found key services down on the Citrix Delivery Controllers. An early-hours Microsoft Defender AV update had falsely flagged the Citrix Broker and High Availability services as malicious and quarantined them, despite the Citrix-recommended AV exclusions being in place. The services could not start.
Resolution:
The affected services were restored from Defender quarantine, and all exclusions were revalidated across Sophos AV and Microsoft Defender. Full service was restored by 08:00, and a Citrix KB article confirming the false positive was shared with stakeholders.
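
Exclusion revalidation of this kind can also be scripted rather than eyeballed. A minimal sketch follows, assuming Python is available on the Delivery Controllers; the two paths listed are illustrative stand-ins for the full Citrix-recommended exclusion list, not the actual configuration from this incident.

```python
import subprocess

# Illustrative subset of Citrix-recommended Defender exclusions;
# substitute the full list from current Citrix guidance.
EXPECTED_EXCLUSIONS = {
    r"C:\Program Files\Citrix\Broker\Service\BrokerService.exe",
    r"C:\Program Files\Citrix\Broker\Service\HighAvailabilityService.exe",
}

def current_defender_exclusions() -> set[str]:
    """Read configured Defender process and path exclusions via PowerShell."""
    result = subprocess.run(
        ["powershell", "-NoProfile", "-Command",
         "(Get-MpPreference).ExclusionProcess; (Get-MpPreference).ExclusionPath"],
        capture_output=True, text=True, check=True,
    )
    return {line.strip() for line in result.stdout.splitlines() if line.strip()}

if __name__ == "__main__":
    missing = EXPECTED_EXCLUSIONS - current_defender_exclusions()
    if missing:
        print("Missing Defender exclusions:")
        for path in sorted(missing):
            print(f"  {path}")
    else:
        print("All expected Defender exclusions are in place.")
```

A comparison like this, run after each AV definition or policy update, makes silent exclusion drift visible before it takes services down.
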


Crypto Ransomware via MS Exchange Zero-Day
Issue:
A late-night user disconnect incident escalated into a full-blown cryptolocker outbreak, initiated via a zero-day vulnerability in MS Exchange Server. The malware bypassed McAfee AV Enterprise.
Root Cause:
A zero-day vulnerability was exploited to gain initial access. McAfee AV Enterprise failed to detect the payload or report any suspicious activity. The attack spread laterally, compromising the backup server and onsite backups, infrastructure servers, and user endpoints.
Resolution:
Rapid containment was initiated in collaboration with the client’s MSP. McAfee AV scans and the ESET online scanner were used for threat identification and scope analysis; ESET detected the threat where McAfee AV did not. The environment was rebuilt from the ground up: Active Directory, Citrix CVAD, NetScalers, file servers, and endpoints. Data was restored from Mimecast and offsite backups once they had been validated.
Post-Incident Improvements:
The client replaced McAfee AV with ESET Enterprise, implemented Azure Site Recovery for robust disaster recovery capability going forward, and hardened patching policies across Exchange, SQL, and Windows platforms.

Smart support, when and where you need it.
We solve complex EUC problems—quickly, quietly, and with care.
