@jeansymolanza
Created October 13, 2025 10:52
The Global1 8.3.0 Upgrade: Transforming a 2-Year Timeline into 8 Months
In 2022, I led the Global1 8.3.0 upgrade, a mission-critical project for our Securities Lending platform, which processes millions of trades daily across 10,000+ counterparties. The platform is the backbone of Barclays' Prime Brokerage business, and historically such upgrades took 18-24 months and were notorious for causing production outages and significant business disruption. Senior management challenged us to compress this timeline dramatically while maintaining zero tolerance for business disruption.
The Challenge
The project's complexity stemmed from multiple dimensions. First, we had to reconcile Barclays' extensive custom code—developed over years to support client-specific prime brokerage requirements—with the vendor's new implementation. Second, we needed to coordinate across six distinct teams spanning three continents: our Build-The-Bank (BTB) team in London, our Run-The-Bank (RTB) support team in Glasgow, our QA team in Asia, the vendor FIS, downstream systems teams (GOS, Memphis, Magics), and upstream trading systems. Third, we had a hard prerequisite: completing the GTF (Global Trade Facility) migration to replace our legacy intercompany module, which itself involved converting 500,000+ trades across complex three-way counterparty linkages.
Strategic Approach to Stakeholder Coordination
I recognized early that delivering on an accelerated timeline required fundamentally rethinking our approach. Rather than treating this as a traditional waterfall project, I broke it into iterative phases with clear gates and stakeholder alignment points.
I established weekly technical deep-dive sessions with all stakeholders, creating a forum where concerns could be raised early and decisions made collaboratively. For the Asia-based QA team, I implemented daily 6-8 AM sync calls to maximize our overlap window and ensure testing blockers were addressed immediately. These weren't status update meetings—I structured them around specific technical challenges, bringing together the right subject matter experts to drive resolution.
To manage the vendor relationship effectively, I created a detailed tracking system documenting every issue, workaround, and vendor communication. When we discovered critical bugs in the vendor's upgrade code, I didn't just log issues—I replicated them in lower environments, provided detailed reproduction steps, and often proposed fixes myself. This approach transformed our vendor relationship from reactive support to proactive partnership, significantly accelerating issue resolution from weeks to days.
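For illustration, a single record in that tracker might look something like the sketch below; the field names, issue ID, and sample entry are illustrative placeholders rather than the actual system:

```python
from dataclasses import dataclass, field

# Illustrative sketch of a vendor-issue record; field names and the sample entry are assumptions.
@dataclass
class VendorIssue:
    issue_id: str
    summary: str
    environment: str                     # e.g. "UAT", "DEV"
    reproduction_steps: list[str]
    workaround: str | None = None
    vendor_reference: str | None = None  # ticket number on the vendor (FIS) side
    communications: list[str] = field(default_factory=list)
    status: str = "OPEN"

# Hypothetical example entry showing the level of detail captured per issue.
issue = VendorIssue(
    issue_id="G1-830-042",
    summary="Upgrade step fails on accounts with three-way counterparty links",
    environment="UAT",
    reproduction_steps=[
        "Restore pre-upgrade snapshot",
        "Run the vendor upgrade step against the linked accounts",
        "Observe the failure and capture logs",
    ],
    workaround="Skip the affected accounts and re-run after the vendor patch",
)
```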
Ensuring Cross-Functional Collaboration
The most significant collaboration challenge came from coordinating with downstream systems. Our platform feeds critical data to GOS (settlements), Memphis (front-end), and Magics (trade capture), and any disruption would cascade across the entire trading ecosystem. I personally reached out to each downstream team's technical lead, conducting architecture review sessions where we walked through every interface change, data format modification, and timing adjustment.
For the Magics team in India, compatibility concerns threatened to derail our timeline. Rather than escalating this as a blocking issue, I scheduled dedicated pairing sessions where we jointly analyzed their integration code, identified the specific incompatibilities, and developed solutions that required minimal changes on their end. This collaborative approach not only solved the technical issue but built trust that paid dividends when unexpected post-upgrade challenges emerged.
Managing Scope and Quality Trade-offs
Early in the project, I faced a critical decision: should we incorporate all Barclays customizations into the vendor's upgrade binary, or separate them? The traditional approach would bundle everything together, but I recognized this would make troubleshooting nearly impossible and increase risk. I advocated strongly for a two-phase approach: first apply the vanilla vendor upgrade, then overlay our custom fixes through proper source control.
This decision initially concerned stakeholders who worried about additional complexity. I addressed this by developing comprehensive automation that made the two-phase approach actually simpler than the traditional method. I created scripts that automated the entire upgrade workflow, performed pre- and post-upgrade validation checks, and generated detailed audit reports. This automation was crucial—it meant we could test the upgrade repeatedly in non-production environments without consuming days of manual effort.
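A simplified sketch of that workflow is below; the check queries, helper script names, and report format are illustrative assumptions, not the actual Global1 tooling:

```python
import json
import subprocess
from datetime import datetime, timezone

# Illustrative only: checks, commands, and paths are placeholders, not the real upgrade scripts.
VALIDATION_CHECKS = [
    ("open_trade_count", "SELECT COUNT(*) FROM trades WHERE status = 'OPEN'"),
    ("counterparty_count", "SELECT COUNT(*) FROM counterparties"),
]

def run_query(sql: str) -> str:
    """Run a read-only check against the upgrade environment via a placeholder CLI."""
    result = subprocess.run(
        ["sql_client", "--readonly", "-q", sql],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()

def snapshot(label: str) -> dict:
    """Capture all validation metrics before or after the upgrade."""
    return {name: run_query(sql) for name, sql in VALIDATION_CHECKS} | {"label": label}

def main() -> None:
    pre = snapshot("pre-upgrade")

    # Phase 1: apply the vanilla vendor upgrade.
    subprocess.run(["./apply_vendor_upgrade.sh"], check=True)
    # Phase 2: overlay the Barclays custom fixes from source control.
    subprocess.run(["./apply_custom_overlays.sh"], check=True)

    post = snapshot("post-upgrade")
    mismatches = {k: (pre[k], post[k])
                  for k in pre if k != "label" and pre[k] != post[k]}

    # Emit an audit report so each rehearsal run in lower environments is fully traceable.
    report = {
        "run_at": datetime.now(timezone.utc).isoformat(),
        "pre": pre,
        "post": post,
        "mismatches": mismatches,
        "result": "PASS" if not mismatches else "INVESTIGATE",
    }
    with open("upgrade_audit_report.json", "w") as fh:
        json.dump(report, fh, indent=2)

if __name__ == "__main__":
    main()
```

Because the whole sequence is scripted end to end, a rehearsal run in a non-production environment reduces to executing one command and reviewing the generated report rather than days of manual checking.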
Maintaining Momentum Under Pressure
The project hit a critical juncture when post-upgrade testing revealed that certain accounts were failing—an issue not present in our testing environments. This threatened a complete rollback after 16 hours of upgrade work. I had to make a high-stakes decision: halt and accept failure, or diagnose the issue under time pressure.
I chose to investigate, recognizing the team's enormous investment. Through systematic debugging, I identified a bug in the vendor's code. Rather than waiting for official vendor guidance, I replicated the issue in our development environment, developed a targeted workaround by carefully commenting out the problematic code section, and assessed the risk implications. I then coordinated with senior management, clearly articulating the risk-reward profile, and obtained approval to proceed. This decisive action saved the upgrade and reinforced our ability to handle production pressures effectively.
Delivering Results
We successfully completed the Global1 8.3.0 upgrade in 8-10 months—a dramatic improvement from the historical 18-24 month timeline. More importantly, we achieved this with minimal business disruption and zero critical outages. The automated deployment process I developed became our standard, enabling the team to perform subsequent upgrades and releases with significantly reduced manual intervention.
Senior management specifically recognized my "great professionalism and adaptability, working long hours and late nights to diagnose issues and apply innovative workarounds to get the business back on track." Beyond the immediate delivery, this project established new standards for how our team approaches complex upgrades, proving that with proper planning, stakeholder engagement, and technical innovation, even the most daunting timelines can be achieved without compromising quality.