@ThomasRohde
Last active April 30, 2025 03:47
Tech Radar 2025
[
{
"description": "Integrate security into development from the start – automating security checks in CI/CD pipelines and “shifting left” activities like code scanning and threat modeling. This practice ensures compliance in a regulated bank environment and catches issues early. Embracing DevSecOps helps teams deliver secure software faster by combining development, operations, and security workflows. We recommend institutionalizing this approach (with training and tooling) so that every build and release includes security and compliance checks by default.",
"isNew": "FALSE",
"name": "DevSecOps & Shift-Left Security",
"quadrant": "Techniques",
"ring": "Adopt"
},
{
"description": "Design systems around business domains with loosely coupled microservices. DDD helps a large bank break down complex domains (payments, loans, trading, etc.) into well-defined bounded contexts, making systems easier to understand and evolve. Coupled with microservices, it allows independent deployment and scaling of services. In practice, this means adopting APIs and services aligned to business capabilities and avoiding tightly coupled monoliths. This approach has proven effective in fintech and banking for managing complexity and accelerating development.",
"isNew": "FALSE",
"name": "Domain-Driven Design (DDD) & Microservice Architecture",
"quadrant": "Techniques",
"ring": "Adopt"
},
{
"description": "Build and expose services with an API-first mindset, designing clear REST/JSON or GraphQL APIs before implementing features. This facilitates reuse and external integration. In Europe’s open banking era (PSD2 and beyond), robust APIs are critical – e.g. integrating with Nordic BankID identity or fintech partners. The bank should adopt industry-standard API specifications (OpenAPI, OAuth2/OIDC for auth) and treat APIs as first-class products. This will not only improve internal interoperability but also position the bank to collaborate in the open banking ecosystem.",
"isNew": "FALSE",
"name": "API-First Development & Open Banking Integration",
"quadrant": "Techniques",
"ring": "Adopt"
},
{
"description": "Manage infrastructure (networks, VMs, container clusters, etc.) using code and version control. IaC tools like Terraform allow consistent provisioning of environments (on-prem or AWS/Azure) using declarative configs. Pair this with GitOps – storing environment configs in Git and automating deployments (e.g. using Argo CD or Flux) so that infra changes are auditable and rollbacks are easier. This helps a hybrid-cloud bank ensure that both cloud and on-prem infrastructure changes go through code review and automation, reducing configuration drift and manual errors. It’s an adopt practice to achieve reliable, repeatable operations.",
"isNew": "FALSE",
"name": "Infrastructure as Code (IaC) & GitOps",
"quadrant": "Techniques",
"ring": "Adopt"
},
{
"description": "Go beyond basic monitoring – implement observability (centralized logs, metrics, and distributed tracing) and adopt Site Reliability Engineering practices. This means designing systems to emit actionable telemetry and setting up dashboards/alerts for key user journeys. SRE principles (like error budgets and blameless postmortems) improve reliability for critical banking services. We recommend adopting open standards (e.g. OpenTelemetry instrumentation) and aiming for proactive incident prevention. Embracing SRE culture will help balance fast delivery with stability, which is crucial in financial services.",
"isNew": "FALSE",
"name": "Observability & SRE Practices",
"quadrant": "Techniques",
"ring": "Adopt"
},
{
"description": "Standardize authentication and authorization using modern identity protocols. Adopt OAuth2/OIDC for both customer-facing and internal systems, integrating with federated identity providers (e.g. Azure AD for staff SSO, BankID for customers) instead of custom auth solutions. This technique improves security and user experience by enabling single sign-on and easier integration with partner services. In practice, it means using well-vetted libraries and flows for login, and externalizing identity management to trusted providers. This is especially important in Europe with GDPR – ensuring secure, federated identity and consent management.",
"isNew": "FALSE",
"name": "Identity Federation (OAuth2/OpenID Connect)",
"quadrant": "Techniques",
"ring": "Adopt"
},
{
"description": "Evolve data architecture by empowering domain teams to own and serve their data as “products.” A Data Mesh approach moves away from a single central data lake toward distributed data management – each business domain (e.g. retail banking, credit risk) manages its own data pipelines and shares datasets via standardized interfaces. This technique promises improved scalability and agility in analytics for large organizations. We recommend trialing Data Mesh principles on a small scale (e.g. one domain), to see if it improves data availability and quality for analysts, while maintaining governance. This fits the bank’s need for local compliance (each domain ensuring GDPR compliance of their data) and faster insights.",
"isNew": "FALSE",
"name": "Data Mesh (Decentralized Data Ownership)",
"quadrant": "Techniques",
"ring": "Trial"
},
{
"description": "Invest in a dedicated “platform team” that curates the development tools, environments, and standards for all engineers – treating the internal developer platform as a product. One practical aspect is deploying an internal developer portal (e.g. based on Spotify’s Backstage) to centralize documentation, service catalogs, and templates. This improves discoverability of services and best practices across teams. By trialing a portal and platform engineering approach, the bank can reduce cognitive load on developers and increase consistency. Early wins could include faster onboarding (new projects scaffolded with approved templates) and easier tracking of software ownership.",
"isNew": "FALSE",
"name": "Platform Engineering & Internal Developer Portals",
"quadrant": "Techniques",
"ring": "Trial"
},
{
"description": "Proactively test system resilience by simulating failures in a controlled way. Chaos Engineering (e.g. using failure injection in non-prod or with toggles in prod) helps uncover weaknesses in distributed systems – important as the bank adopts cloud and microservices. Trial running chaos experiments on critical services (such as randomly shutting down servers, injecting network latency, or simulating dependency outages) to verify that failovers and fallbacks work. The goal is to build confidence that systems can withstand infrastructure or dependency failures without customer impact. Start with staging environments and only move to production experiments once maturity grows (and always with safeguards). This technique can greatly enhance reliability culture when trialed carefully.",
"isNew": "FALSE",
"name": "Chaos Engineering",
"quadrant": "Techniques",
"ring": "Trial"
},
{
"description": "The rise of large language models (LLMs) and generative AI tools has opened up new ways to aid developers – from code autocompletion (e.g. GitHub Copilot) to automated code generation or testing suggestions. We recommend trialing these AI-assisted development tools to boost productivity, *with proper guardrails*. For example, developers might use an AI pair programmer to write boilerplate code or suggest test cases. However, caution is needed to avoid known pitfalls – overreliance or blind acceptance of AI output can introduce bugs or bloated code. Any usage should maintain human review and good engineering practices. With policies in place (to prevent sensitive data in prompts and ensure IP compliance), generative AI can complement the team by speeding up coding and learning.",
"isNew": "FALSE",
"name": "Generative AI-Assisted Development (with Guardrails)",
"quadrant": "Techniques",
"ring": "Trial"
},
{
"description": "Evolve the security model towards “never trust, always verify.” Zero Trust means that even inside the network perimeter, no service or user is inherently trusted – authentication, authorization, and encryption are enforced at every access. For a bank, this is a prudent response to advanced threats and remote work. We suggest trialing Zero Trust principles by implementing measures like micro-segmentation of networks, requiring strong identity for every API call, and continuous risk assessment for user sessions. For example, an internal app should call back-end APIs with tokens that are validated and scoped, rather than relying on a flat internal network trust. Pilot this in a segment of the environment (e.g. an internal app requiring device trust + SSO auth on each request) and expand gradually. The outcome should be improved security posture and easier compliance audits, at the cost of some initial complexity in implementation.",
"isNew": "FALSE",
"name": "Zero Trust Architecture",
"quadrant": "Techniques",
"ring": "Trial"
},
{
"description": "Increase usage of asynchronous, event-based communication between components. Many banking processes (payments, fraud alerts, core updates) can benefit from event-driven design where services publish events (to a Kafka topic, for instance) and others subscribe, rather than tight point-to-point integrations. Trial this architecture in areas requiring decoupling or real-time data distribution – e.g. publishing an “account balance changed” event that many systems (mobile app, notifications service, analytics) can react to. Adopting event-driven patterns can improve scalability and flexibility. The bank should develop standards for defining events (using schemas like Avro or JSON) and ensure strong governance on topics (to avoid chaos). If not already heavily used, treat this as a technique to pilot in parallel with existing synchronous designs and gradually increase as confidence grows.",
"isNew": "FALSE",
"name": "Event-Driven Architecture",
"quadrant": "Techniques",
"ring": "Trial"
},
{
"description": "With sustainability becoming a corporate priority in Europe, assess practices to measure and reduce the carbon footprint of software. Green software engineering involves designing applications and infrastructure for energy efficiency – e.g. optimizing code to use fewer CPU cycles, scheduling jobs in off-peak hours when electricity is greener, or leveraging cloud regions with renewable energy. While still an emerging practice, it aligns with Scandinavian values of sustainability. The bank should start tracking metrics like CPU/memory usage efficiency and consider tools to estimate carbon impact of compute. We recommend evaluating these techniques and possibly doing a pilot (such as optimizing a batch process to cut runtime) to see tangible savings. This can both reduce costs and support ESG goals.",
"isNew": "FALSE",
"name": "Green Software Engineering",
"quadrant": "Techniques",
"ring": "Assess"
},
{
"description": "As data privacy regulations (GDPR and upcoming EU laws) tighten, assess advanced techniques that allow data analytics or machine learning while protecting sensitive information. This includes methods like differential privacy (adding noise to data results to prevent leakage of individual info) and multi-party computation or federated learning (where models are trained across parties without sharing raw data). A bank could benefit from PETs in areas like fraud detection across banks or research on combined datasets without exposing customer data. While these techniques are mostly experimental, keep an eye on their maturation. They could become important to enable innovation (especially AI) in a privacy-preserving way in the near future.",
"isNew": "FALSE",
"name": "Privacy-Enhancing Technologies (PETs)",
"quadrant": "Techniques",
"ring": "Assess"
},
{
"description": "Cryptographic algorithms resilient to quantum computers are being standardized (e.g. by NIST) in anticipation of future quantum threats. It’s prudent for a bank to assess its long-term cryptography strategy. Although practical quantum computers that could break RSA/AES are not here yet, data being transmitted today could be stored and decrypted later (“harvest now, decrypt later” threat). We recommend starting to evaluate post-quantum cryptography (PQC) algorithms – for example, testing PQC libraries for secure communications or storing critical data encrypted with a hybrid classical+PQC approach. Assess vendor offerings (like TLS support for PQC cipher suites) and track regulatory guidance on this front. Being early in planning a migration to quantum-safe crypto (perhaps over the next 5-10 years) will save panic later and ensure continued safety of confidential financial data.",
"isNew": "FALSE",
"name": "Post-Quantum Cryptography Readiness",
"quadrant": "Techniques",
"ring": "Assess"
},
{
"description": "Business units increasingly use low-code platforms (like Microsoft PowerApps or Outsystems) to build small applications or workflows. While this can accelerate delivery, it comes with risk of sprawl and lack of control. The IT organization should assess how to govern and integrate these citizen development efforts. Consider establishing guidelines on data access, security, and lifecycle management for apps built with low-code tools. The technology itself can be useful for simple internal tools or prototypes, but governance is key in a bank setting (to avoid shadow IT or compliance issues). Evaluate whether to formally support a low-code platform (perhaps in a sandbox environment) and how it might fit into the overall architecture. At this stage, treat it as something to watch and possibly enable under governance, rather than widely adopt without rules.",
"isNew": "FALSE",
"name": "Low-Code/No-Code Governance",
"quadrant": "Techniques",
"ring": "Assess"
},
{
"description": "In the last decade, blockchain/DLT was hyped in banking for everything from payments to smart contracts. At this point, we suggest a skeptical but open-minded assess stance. Monitor where DLT truly adds value (for example, inter-bank settlement networks or trade finance consortia) versus where a traditional database suffices. Most internal banking operations don’t need a blockchain, but there may be niche use cases (secure document provenance, digital identity verification, etc.) worth exploring. The bank should keep a small R&D effort or participate in industry pilots to stay informed (especially if central bank digital currencies or other blockchain-based systems become relevant). However, do not invest heavily unless a clear, practical advantage emerges.",
"isNew": "FALSE",
"name": "Blockchain / Distributed Ledger Use Cases",
"quadrant": "Techniques",
"ring": "Assess"
},
{
"description": "Avoid large-scale, big bang replacement of core banking systems in one go. This traditional approach (multi-year projects aiming to rip-and-replace the core) has a high failure risk. Instead, prefer incremental modernization (e.g. the strangler fig pattern, where new microservices gradually replace pieces of the legacy). The bank should put a hold on any new initiatives that attempt massive rewrites without intermediate deliverables. Experience shows it’s safer to evolve core systems gradually, delivering value along the way, than to bet everything on a distant cut-over.",
"isNew": "FALSE",
"name": "“Big Bang” Core Rewrites",
"quadrant": "Techniques",
"ring": "Hold"
},
{
"description": "Robotic Process Automation (RPA) tools (robots scripting UI clicks) have been used to quickly automate legacy processes. While RPA has short-term value for legacy integration, treat it as a stopgap, not a permanent solution. We advise holding off on expanding RPA usage for new processes – it often leads to brittle automations that are hard to maintain. Instead, prioritize proper APIs or integrations. For existing critical RPA bots, plan to replace them with more robust solutions over time. In summary, don’t use RPA as an excuse to avoid system modernization; use it sparingly and avoid new dependency on it if a real integration approach is feasible.",
"isNew": "FALSE",
"name": "Over-Reliance on RPA for Integration",
"quadrant": "Techniques",
"ring": "Hold"
},
{
"description": "Any technique that keeps data or workflows in silos – such as heavy reliance on email or spreadsheets for critical data exchange between departments – should be put on hold (and ultimately eliminated). These manual or siloed processes hamper agility and often cause errors or inconsistencies. The bank should continue to invest in process automation and data integration across units, rather than allowing key business processes to live in unconnected Excel files or aging legacy systems that don’t share data. Holding onto such practices is counterproductive when modern, shareable systems exist.",
"isNew": "FALSE",
"name": "Siloed Data and Manual Processes",
"quadrant": "Techniques",
"ring": "Hold"
},
{
"description": "Clinging to strictly waterfall project management (long specification and development cycles with big releases) is counterproductive given the pace of change and customer expectations. While regulatory projects sometimes require careful stage gates, the organization should *not* default to waterfall for software delivery. “Hold” on pure waterfall models and instead embrace agile methodologies or a hybrid approach that allows iterative development and feedback. In practice, this means phasing out year-long development cycles with no customer feedback in between. The goal is to prevent scenarios where software is outdated by the time it launches. Favor shorter cycles, even within compliance constraints, and use continuous improvement rather than big one-time deliveries.",
"isNew": "FALSE",
"name": "Waterfall-Only Delivery Culture",
"quadrant": "Techniques",
"ring": "Hold"
},
{
"description": "Containers and Kubernetes have become the de facto standard for deploying and scaling applications. The bank should confidently adopt Kubernetes for both on-premises (e.g. OpenShift or VMware Tanzu) and cloud (AWS EKS, Azure AKS) deployments. Docker/OCI containers package applications consistently, while Kubernetes provides scheduling, self-healing, and scaling. This combination enables a hybrid cloud deployment model and portability. By now, these technologies are mature – best practices (like using Kubernetes for stateless services and stateful sets or operators for databases) should be part of the toolset. Adopting containerization and orchestration increases infrastructure efficiency and consistency across the organization.",
"isNew": "FALSE",
"name": "Container Orchestration (Kubernetes & Docker)",
"quadrant": "Tools",
"ring": "Adopt"
},
{
"description": "The bank should use Terraform (or similar IaC tools) as a primary way to manage infrastructure provisioning. Terraform scripts become the single source of truth for configuring cloud resources (VPCs, subnets, VMs, databases, etc.) and even on-prem infrastructure supported by plugins. This approach brings automation and version control to infrastructure changes, reducing manual work and errors. Given our multi-cloud environment, Terraform’s cloud-agnostic configurations are particularly useful. Adopt it widely – ensure operations teams and developers provisioning infrastructure use IaC pull requests rather than manual console changes. This yields more consistent environments and easier disaster recovery.",
"isNew": "FALSE",
"name": "Infrastructure-as-Code Tools (Terraform)",
"quadrant": "Tools",
"ring": "Adopt"
},
{
"description": "Modern continuous integration and delivery pipelines should be standard. If not already, adopt a robust pipeline tool that integrates with source control. For example, GitLab CI or GitHub Actions can automate building, testing, and deploying applications whenever code is merged. In this bank, where some projects may still use older CI servers or manual deploys, migrating to a unified pipeline platform will improve speed and reliability. Features like pipeline-as-code (YAML definitions), artifact repositories, and integration with security scans are essential. By adopting such pipelines, teams get fast feedback on changes and can deploy to production with confidence and traceability.",
"isNew": "FALSE",
"name": "CI/CD Pipeline Platforms (GitLab CI, GitHub Actions, Azure DevOps)",
"quadrant": "Tools",
"ring": "Adopt"
},
{
"description": "Use a proven observability toolchain to centralize logging, monitoring, and alerting. Adopt tools like Prometheus for metrics collection and alerting, Grafana for dashboards/visualization, and an ELK/EFK stack (Elasticsearch/Logstash/Kibana or OpenSearch) for log aggregation and search. These open-source tools (often bundled in cloud offerings as well) have become industry standards for keeping an eye on system health. In a banking context, having a single pane of glass for monitoring is crucial for incident response and performance tuning. The stack should be integrated with OpenTelemetry instrumentation from apps, and cover both legacy systems (if possible) and new cloud services. Adoption of this stack ensures that engineers can quickly diagnose issues across complex, distributed systems.",
"isNew": "FALSE",
"name": "Observability Stack (Prometheus, Grafana, ELK)",
"quadrant": "Tools",
"ring": "Adopt"
},
{
"description": "Managing APIs is critical (especially with external exposure via open banking). The bank should adopt a robust API management tool or gateway if not already in place. This could be a commercial solution like Apigee or Kong, or an open-source gateway, as long as it provides security (OAuth, rate limiting), monitoring, and developer portal features. An API gateway helps standardize how internal and external clients access services. It can abstract away the mainframe or microservice details behind consistent endpoints. We recommend using the API gateway for all external-facing APIs and gradually for internal services to enforce policies uniformly. This centralizes concerns like authentication, logging, and throttling, freeing teams to focus on core logic.",
"isNew": "FALSE",
"name": "API Management Gateway",
"quadrant": "Tools",
"ring": "Adopt"
},
{
"description": "As microservices proliferate, a service mesh can manage cross-service communication concerns (like traffic routing, retries, encryption) in a uniform way. Tools like Istio or Linkerd provide a layer on top of Kubernetes that handles service-to-service networking with features like mutual TLS, observability, and fine-grained routing. Service meshes have matured in recent years, but they add complexity. We recommend trialing a service mesh in a contained environment – for instance, deploy it for a set of microservices that would benefit from dynamic traffic shaping or enhanced security. Evaluate if the overhead is justified by the gain in resiliency and security (e.g. automatic encryption of all service calls, circuit breaking, etc.). In a bank, where security is paramount, a mesh’s ability to uniformly enforce mTLS and gather detailed telemetry is promising, but operational complexity needs to be understood via trials.",
"isNew": "FALSE",
"name": "Service Mesh (Istio, Linkerd)",
"quadrant": "Tools",
"ring": "Trial"
},
{
"description": "To realize GitOps practices, experiment with tools like Argo CD or Flux which watch Git repositories and automatically sync changes to Kubernetes clusters. This means your deployment manifests (Helm charts, K8s YAMLs, etc.) in Git become the source of truth and any change (after PR review) is applied to the cluster. Trialing Argo CD can improve deployment consistency and rollback capabilities. In our context, this could be piloted for a few non-critical applications or in a dev environment: developers merge a change and Argo CD deploys it without manual kubectl steps. The benefits to assess are easier environment rebuilds and auditability (Git history shows what changed). If successful, this could become a standard way to manage not just apps but also infrastructure config in the bank’s Kubernetes environments.",
"isNew": "FALSE",
"name": "GitOps Deployment Tools (Argo CD, Flux)",
"quadrant": "Tools",
"ring": "Trial"
},
{
"description": "Securely managing secrets (passwords, keys, certificates) is non-negotiable in financial systems. A dedicated secrets manager like Vault can provide encrypted storage, access control, and audit logging for all sensitive credentials. We suggest trialing Vault (or a cloud equivalent like AWS Secrets Manager) to centralize secret management. This involves integrating applications to fetch secrets at runtime or injecting via Kubernetes secrets populated by Vault. The goal is to eliminate hard-coded or manually distributed credentials. A trial could start with a specific use-case: e.g. manage database credentials with dynamic rotation. Over time, expanding this ensures that even across multi-cloud and on-prem, secrets are handled consistently and safely. Evaluate ease of integration and performance, and educate teams on using the API/agents to retrieve secrets.",
"isNew": "FALSE",
"name": "Secrets Management (HashiCorp Vault)",
"quadrant": "Tools",
"ring": "Trial"
},
{
"description": "Following on the generative AI trend, consider trialing AI-powered coding assistants as a tool. For instance, GitHub Copilot can be integrated into developers’ IDEs to suggest code completions or even entire functions. Additionally, internal ChatGPT-based bots could be set up (trained on company documentation) to answer dev questions. The aim is to boost productivity and knowledge sharing. In trials, measure whether these tools reduce the time for routine coding tasks or help resolve queries faster. It’s important to have policies: e.g. avoid pasting proprietary code into external AI prompts. Perhaps use self-hosted models if needed for confidentiality. A cautious pilot with volunteer teams and close monitoring of output quality is recommended.",
"isNew": "FALSE",
"name": "AI Code Assistants (Copilot, ChatGPT Integration)",
"quadrant": "Tools",
"ring": "Trial"
},
{
"description": "Strengthen the DevSecOps toolchain by experimenting with automated security testing tools and policy-as-code. For example, SAST/DAST tools (like Snyk, Checkmarx, OWASP ZAP) can be integrated into CI pipelines to catch vulnerabilities in code or running services. In parallel, policy-as-code frameworks like Open Policy Agent (OPA) or Kyverno can enforce security rules (e.g. no container runs as root, or all dependencies must be from approved repositories) in a codified manner. Trial these tools in the CI/CD process and Kubernetes clusters. For instance, use OPA Gatekeeper on a test cluster to reject deployments that don’t meet security criteria, or run Snyk on a few projects to see the findings. This will highlight potential security gaps and help build automation to prevent them. The goal is that after trials, the bank can adopt a set of tools that consistently enforce security and compliance standards across the pipeline, which is critical in our regulated context.",
"isNew": "FALSE",
"name": "Security Scanning & Policy-as-Code Tools",
"quadrant": "Tools",
"ring": "Trial"
},
{
"description": "Evaluate platforms like Microsoft Power Platform (PowerApps/Power Automate) or Outsystems as tools for rapid application development by non-engineers or for simple forms/workflows. They can potentially offload simple app creation from IT, but come with challenges (governance, integration limits). The bank should assess how these tools might fit in – perhaps for departmental apps or prototyping – and what governance model would be required. Assess technical capabilities (can they connect to our databases securely? do they meet compliance logging needs?). At this stage, gather feedback from any business users already using them and perhaps do a proof-of-concept to understand limitations, but do not roll out widely without a clear policy and IT oversight plan.",
"isNew": "FALSE",
"name": "Low-Code Development Platforms",
"quadrant": "Tools",
"ring": "Assess"
},
{
"description": "Our mainframe is critical, but historically isolated from modern toolchains. Investigate tools that bridge mainframe development with contemporary workflows. Zowe, an open-source framework from the Open Mainframe Project, offers CLI and API access to z/OS, enabling integration of mainframe assets into Git-based workflows. Similarly, IBM’s z/OS Connect provides REST/JSON APIs to leverage mainframe programs. We recommend assessing these tools to improve mainframe agility – e.g. using Zowe to script mainframe deployments or to include COBOL code in the same CI pipeline as other apps. This would help treat mainframe “as just another platform” in our DevOps ecosystem. Start by exploring feasibility (do our mainframe engineers find it useful? Does it maintain security?) and possibly trial on a non-critical application. The aim is to reduce the silo around mainframe development and enable modern automation and API exposure for legacy functions.",
"isNew": "FALSE",
"name": "Mainframe DevOps Tools (Zowe, z/OS Connect)",
"quadrant": "Tools",
"ring": "Assess"
},
{
"description": "Monitor emerging AIOps tools that apply machine learning to operations data (logs, metrics) to automatically detect anomalies or pinpoint root causes. For example, tools might learn baseline performance and alert when something is off, or correlate events across systems faster than a human. In a complex bank IT landscape, this could be valuable to cut through alert noise and identify issues (like a memory leak or an unusual surge in transactions) quickly. Assess products or cloud features that offer this (Splunk’s AI ops, Dynatrace, Datadog AI analytics, etc.). The bank can start by feeding historical incident data to such a tool in a test to see if it would have caught the problems or helped triage. This area is still maturing, so an assess phase is appropriate – we shouldn’t rely on it yet, but keep track as it could significantly enhance our incident response and system tuning in the future.",
"isNew": "FALSE",
"name": "AIOps & Automated Observability",
"quadrant": "Tools",
"ring": "Assess"
},
{
"description": "Evaluate the use of cloud-hosted development environments to standardize and speed up developer onboarding. Services like GitHub Codespaces or open-source Gitpod allow developers to spin up pre-configured dev environments in the cloud, accessible via browser or VS Code, without having to install all dependencies locally. In an enterprise setting, this can ensure everyone uses a consistent environment and can dramatically cut the “works on my machine” problems. We suggest assessing this by having a few developers use cloud dev environments for a sprint. Check if it improves setup time and if performance is acceptable. Pay attention to integration with on-prem resources (perhaps needed via VPN) and cost management. If viable, this tool could be adopted to improve productivity, especially for new hires or when switching between many projects – a developer can launch an environment for microservice X on demand. It also enhances security (code stays in cloud, not on laptops) which might align with the bank’s security posture.",
"isNew": "FALSE",
"name": "Cloud Development Environments (Codespaces/Gitpod)",
"quadrant": "Tools",
"ring": "Assess"
},
{
"description": "Discontinue use of outdated tools like older Jenkins setups that rely on fragile scripts, or version control systems like SVN/CVS. Maintaining these legacy CI servers (for example, a Jenkins with a myriad of custom plugins and no pipeline-as-code) is not ideal when modern alternatives exist. Migrate projects to pipelines defined in code (Jenkinsfile or, better, to GitLab/GitHub CI). Similarly, any code still in Subversion or other old VCS should be moved to Git to align with modern workflows and branching models. Holding onto these tools increases maintenance burden and makes automation harder, so put them in hold – no new projects should start on them, and plan to phase out existing usage by migrating to supported, modern platforms.",
"isNew": "FALSE",
"name": "Legacy CI and Version Control Systems",
"quadrant": "Tools",
"ring": "Hold"
},
{
"description": "This is largely historical, but any remaining usage of deprecated front-end tech like Adobe Flash or Microsoft Silverlight (which were used in older internal UIs or dashboards) must be on hold/retired. Browsers no longer support these, and continuing to use them poses security risks. If any internal tool still depends on such technology, expedite its replacement. Also avoid plugins like Java Applets or ActiveX – all considered obsolete. The bank should ensure all user interfaces are web standards-based (HTML5/JavaScript) or modern desktop frameworks, and no new development should consider outdated client tech. Basically, treat any legacy tech that is end-of-life as hold, and allocate effort to replace them with modern equivalents for compatibility and security.",
"isNew": "FALSE",
"name": "Deprecated UI Technologies (Flash, Silverlight, etc.)",
"quadrant": "Tools",
"ring": "Hold"
},
{
"description": "While Excel is ubiquitous for analysis, the practice of building mission-critical “applications” or workflows purely in Excel or Access (with complex macros, etc.) should be discouraged (held) moving forward. Such tools lack proper versioning, security controls, and scalability, often resulting in shadow systems that are error-prone. The bank likely has some Excel macro tools in use by departments; IT should inventory these and offer better solutions (like a proper web app or at least a governed Power BI report) as alternatives. No important new process should be implemented as a tangle of spreadsheets. By putting this practice on hold, we signal that while ad-hoc analysis in Excel is fine, anything approaching an application or multi-user data store must be done with appropriate software engineering and controls.",
"isNew": "FALSE",
"name": "MS Excel/Access as “Apps”",
"quadrant": "Tools",
"ring": "Hold"
},
{
"description": "If there are any internally developed frameworks or tools from years past that no longer have active development or external support, treat them as hold for new usage. Examples might include a proprietary web framework or a custom orchestration tool that predates current standards. Relying on such tools can become a liability (limited expertise, not up to date with security). Teams should instead adopt industry-supported frameworks and gradually migrate away from internal legacy libraries. By halting new development on these, the organization can reduce technical debt. Allocate maintenance effort to keep existing systems running, but direct new projects to modern, community-backed options.",
"isNew": "FALSE",
"name": "Unmaintained “Homegrown” Frameworks",
"quadrant": "Tools",
"ring": "Hold"
},
{
"description": "Amazon Web Services is a primary cloud platform for the bank’s workloads and should be adopted widely, with proper guardrails. AWS offers scalability and rich managed services (from EC2 and S3 to RDS, Lambda, etc.) that can accelerate development if used appropriately. The bank should continue migrating suitable applications to AWS, taking advantage of its resilience and global infrastructure. However, adoption must go hand-in-hand with compliance: ensure data is stored in approved regions (e.g. EU or local AWS regions like Stockholm for data residency), use encryption (KMS) and identity controls (IAM) rigorously, and follow European banking regulator guidelines for cloud outsourcing. AWS is a mature platform now, so the focus should be on adopting cloud-native features (like managed databases, serverless functions for event-driven glue, etc.) rather than simply treating it as another data center. This will maximize the benefit while meeting security requirements.",
"isNew": "FALSE",
"name": "AWS Cloud Services (with Compliance Controls)",
"quadrant": "Platforms",
"ring": "Adopt"
},
{
"description": "Alongside public cloud, the bank operates an on-premises private cloud to host sensitive or latency-critical workloads. We recommend continuing to adopt and standardize this private cloud platform, likely based on Kubernetes (e.g. Red Hat OpenShift or a similar distribution) and virtualization. This provides cloud-like elasticity and self-service on premises, which is important for data that cannot leave the country or for mainframe-adjacent systems. By using Kubernetes as the abstraction, development teams get a consistent deployment target (similar to AWS EKS) even when deploying to the bank’s own data centers. Adopt practices like an internal “container service” or platform-as-a-service on top of Kubernetes to simplify usage. This allows the bank to burst to AWS/Azure for peaks, but also run steady workloads in-house efficiently. Essentially, treat K8s as a ubiquitous layer across environments.",
"isNew": "FALSE",
"name": "Private Cloud on Kubernetes (OpenShift/VMware)",
"quadrant": "Platforms",
"ring": "Adopt"
},
{
"description": "Kafka has emerged as the backbone for real-time data streaming and integration in the enterprise. The bank should adopt Kafka (or a managed version like Confluent or Amazon MSK) as its standard event streaming platform. Many peer institutions use Kafka to decouple systems and stream transactions, logs, and events in real time. Its publish-subscribe model will support the event-driven architectures we’re trialing. Adoption means expanding usage of Kafka beyond perhaps a few use cases to making it a central nervous system – for example, feeding customer transaction events from core banking to fraud detection, analytics, and mobile notifications in parallel. With adoption comes the need to strengthen the platform: ensure high availability across data centers, schema management (using something like Confluent Schema Registry), and security (TLS encryption, client authentication, etc.). Kafka should become a core enterprise platform like the ESB was in the past, but far more scalable and flexible for modern needs.",
"isNew": "FALSE",
"name": "Apache Kafka Event Streaming Platform",
"quadrant": "Platforms",
"ring": "Adopt"
},
{
"description": "To reduce reliance on expensive proprietary databases and embrace flexibility, adopt open-source relational databases like PostgreSQL as the default for new applications. PostgreSQL has proven enterprise-ready, with features like advanced SQL, JSONB support, and strong reliability. Many banking workloads (web services, reporting, even core ledgers for smaller products) can run on Postgres or similar OSS databases (MariaDB, etc.) with proper architecture (replication, backup, tuning). The bank should create a managed service offering for PostgreSQL – either via cloud (Aurora Postgres, Azure Postgres) or on-prem (using a Kubernetes operator or VM automation) – to make it easy for teams to spin up databases. By adopting Postgres, the organization can avoid the heavy licensing of Oracle/DB2 for every new project and still get robust performance. Of course, existing critical systems on Oracle or mainframe DB2 will stay for now, but adoption means preferring OSS DB for new things and slowly migrating less critical uses off proprietary platforms to Postgres where feasible.",
"isNew": "FALSE",
"name": "PostgreSQL and Open Source Databases",
"quadrant": "Platforms",
"ring": "Adopt"
},
{
"description": "The IBM Mainframe remains a crucial platform for core banking data. However, the strategy should be to adopt approaches that integrate the mainframe via APIs or event streams rather than direct legacy access for new applications. For example, use products like IBM z/OS Connect to expose COBOL business logic as RESTful APIs, or stream mainframe transactions to Kafka for downstream consumers. This effectively turns the mainframe into a high-performance database or service provider that modern systems can easily interact with. By adopting this integration layer, the bank can build new digital services (mobile apps, web portals) that fetch data from the mainframe without each team needing deep mainframe expertise. It also helps eventually offload workloads from the mainframe by gradually moving the perimeter outward (the strangler pattern). In summary, keep the mainframe as a platform of record, but adopt API-driven integration so it seamlessly participates in the modern architecture.",
"isNew": "FALSE",
"name": "Mainframe API Integration (via REST/ESB layers)",
"quadrant": "Platforms",
"ring": "Adopt"
},
{
"description": "The bank already uses Azure to some extent; we recommend trialing an expansion of Azure for workloads that can benefit from its strengths. Azure provides excellent integration with Microsoft enterprise tools (Active Directory, Office 365) and has growing offerings in analytics and AI (e.g. Azure Machine Learning, Power BI integration). It could serve as a multi-cloud secondary to mitigate dependency on AWS and negotiate better resilience (or cost). Trial moving a particular workload or launching a new project on Azure – for example, a data analytics pipeline using Azure Synapse, or a .NET-based application using Azure App Service. Assess how Azure’s services compare in performance and ease, and ensure multi-cloud governance (network connectivity, IAM differences) is understood. This trial will inform a balanced multi-cloud strategy. Given regulatory emphasis on not being locked to one vendor, having Azure as a viable platform for certain use cases (or failover) is strategically sound.",
"isNew": "FALSE",
"name": "Azure Cloud Services",
"quadrant": "Platforms",
"ring": "Trial"
},
{
"description": "Snowflake has gained traction as a cloud data warehousing and analytics platform, including in financial services. We suggest trialing Snowflake for the bank’s analytical workloads that require combining data from multiple sources and scaling compute elastically. For instance, a trial could involve offloading a segment of the enterprise data warehouse or building a regulatory reporting data mart in Snowflake. Snowflake’s separation of storage and compute and near-infinite scalability could accelerate complex queries and make data sharing easier (even with external partners, given Snowflake’s data marketplace/sharing features). During the trial, evaluate performance on large data sets, the security model (it can encrypt and even do EU-only data processing), and cost governance (as usage can grow quickly if unchecked). If successful, Snowflake could become a key platform for analytics in parallel to internal solutions, especially since it’s cloud-agnostic and has strong support for semi-structured data.",
"isNew": "FALSE",
"name": "Snowflake Data Cloud",
"quadrant": "Platforms",
"ring": "Trial"
},
{
"description": "Consider piloting a graph database like Neo4j to power use cases involving complex relationships – for example, fraud detection, AML (anti-money laundering) network analysis, or customer 360 views. Traditional relational databases struggle with recursive relationship queries, whereas graph databases excel at traversing connections (like finding rings of accounts transferring money). A trial could involve taking a subset of fraud monitoring data (transactions, accounts, devices) and modeling it in a graph DB to see if it surfaces insights (e.g. detecting a ring of accounts linked by common attributes). Evaluate the ease of integration (graph DBs are a different paradigm) and if the real-time query capability on relationships provides value. While not all workloads suit a graph, this platform could complement the bank’s data arsenal for specific analytical tasks. If results are promising, graph technology might move to Adopt for those specialized domains; if not, it can remain a niche tool.",
"isNew": "FALSE",
"name": "Graph Databases (Neo4j) for Fraud & Risk",
"quadrant": "Platforms",
"ring": "Trial"
},
{
"description": "With EU concerns about data sovereignty, platforms like Gaia-X (a European federated cloud initiative) are being developed. The bank should assess these emerging “sovereign cloud” offerings which aim to ensure EU data control and compliance. While Gaia-X is not a cloud provider itself, it sets standards for interoperability and transparency among providers (including possibly European-hosted cloud services). Keep track of local cloud providers or government-endorsed platforms that might align better with regulatory expectations in the future. At this stage, this is more about monitoring the landscape and engaging in pilot programs if available. If down the line regulators favor certain certified clouds for financial data, the bank would benefit from having evaluated them early. For now, remain informed, maybe test migrating a small workload to a European partner cloud that complies with Gaia-X principles, but do not shift strategy until these platforms prove themselves stable and beneficial.",
"isNew": "FALSE",
"name": "Sovereign Cloud / Gaia-X Initiatives",
"quadrant": "Platforms",
"ring": "Assess"
},
{
"description": "New core banking platforms delivered as SaaS or cloud-native solutions (like Thought Machine Vault, Mambu, or Temenos SaaS offerings) have emerged, promising faster product innovation and lower IT overhead. For our large bank, ripping out the mainframe core is not immediately feasible, but we should assess these modern core platforms for specific use cases or peripheral systems. For example, a smaller subsidiary or a new digital-only product could trial a modern core banking SaaS to handle accounts and transactions. Key factors to evaluate: functional coverage (can it handle the complexity of our products?), integration ease (APIs to hook into our channels and data), compliance (is it approved for EU banking data?), and cost model. While we wouldn’t move the main retail/core wholesale banking system to SaaS yet, these platforms might suit greenfield initiatives or eventually serve as a gradual replacement path. Keep a close eye and possibly run a proof-of-concept on one modern core platform to measure its maturity against our needs.",
"isNew": "FALSE",
"name": "Core Banking SaaS Solutions",
"quadrant": "Platforms",
"ring": "Assess"
},
{
"description": "As data security requirements grow, assess cloud features for confidential computing, such as AWS Nitro Enclaves or Azure Confidential VMs, which keep data encrypted even during processing by using hardware-backed secure enclaves. This is particularly relevant if we consider processing highly sensitive data (like personal financial data, encryption keys, etc.) in a public cloud – enclaves ensure even cloud admins or OS-level breaches cannot access the plaintext data. The bank should evaluate if any upcoming projects could leverage this tech (for instance, secure multi-party analytics where each party’s data must remain confidential). While still an emerging platform feature, it could be a game-changer for cloud trust. Engage with a small-scale test: e.g. use an enclave to perform a cryptographic operation or data analysis on live data, and verify the development experience and performance. This will position us to use such security measures when they become more mainstream. For now, it’s an assess-and-learn item, aligning with our cautious approach to cloud security.",
"isNew": "FALSE",
"name": "Confidential Computing Enclaves",
"quadrant": "Platforms",
"ring": "Assess"
},
{
"description": "The IBM mainframe will continue to run existing mission-critical systems, but the bank should avoid putting new workloads or features directly on the mainframe platform (unless absolutely necessary). Treat expansion of the core mainframe footprint as hold – i.e. do not write new applications in COBOL/CICS on z/OS. Instead, use the mainframe in adopt mode (as a stable system of record accessed via APIs). This “hold” guidance means if a new product or service is being developed, default to cloud or distributed platforms rather than assuming it should reside on the mainframe. This helps gradually reduce dependence and high MIPS costs, and ensures new development uses modern tech where talent is more available. The mainframe should be reserved for what it does best (high-volume transaction processing for current systems) but not burdened with new functions that could live elsewhere.",
"isNew": "FALSE",
"name": "New Workloads on Mainframe",
"quadrant": "Platforms",
"ring": "Hold"
},
{
"description": "Many banks traditionally used heavy Enterprise Service Bus (ESB) or proprietary integration brokers to connect systems (e.g. older IBM WebSphere Message Broker, Oracle Fusion Middleware, etc.). These often became monoliths themselves, with centralized logic that’s hard to change quickly. We advise placing such legacy integration platforms in hold – do not increase their usage, and plan to decrement it. Modern integration patterns (microservices, lightweight messaging, or cloud integration services) are more agile. For instance, instead of building new logic in an aging ESB, implement it as a microservice that consumes from Kafka or uses an API gateway. The ESB can continue to operate for existing flows for now, especially for mainframe or batch integrations, but no new project should choose the old ESB as the solution by default. As event-driven and API-driven architectures rise, the reliance on a central ESB should shrink.",
"isNew": "FALSE",
"name": "Monolithic ESB / Legacy Integration Suites",
"quadrant": "Platforms",
"ring": "Hold"
},
{
"description": "Similar to mainframes, refrain from expanding usage of legacy commercial databases like Oracle, DB2, or Sybase for new applications. These systems are reliable, but they come with high licensing costs, and modern alternatives exist. Mark this as hold meaning: maintain and optimize existing installations (many core systems will still run on Oracle/DB2 for now), but if a new app or microservice needs a database, don’t automatically spin up a new Oracle schema – use PostgreSQL or cloud-managed databases unless there’s a compelling reason. Also, avoid creating new dependencies on lesser-used proprietary datastores that the bank has (for example, if a specific vendor’s database was used in a niche system, don’t reuse it elsewhere just out of convenience). Consolidate toward open systems. This reduces vendor lock-in and cost over time. In summary, for any new workload, treat Oracle/DB2 as options of last resort (only if performance or specific features demand it, and even then consider if PostgreSQL could suffice).",
"isNew": "FALSE",
"name": "Legacy Proprietary Databases (for New Uses)",
"quadrant": "Platforms",
"ring": "Hold"
},
{
"description": "Java remains a core backend language in enterprise development. The bank should standardize on using modern Java (Java 17 or the latest LTS, Java 21) for its reliability, performance, and huge ecosystem. Adopt frameworks like Spring Boot for building microservices and REST APIs – Spring is a de facto standard that offers productivity and integration with everything from databases to security. Modern Java brings improvements (like records, pattern matching, and the new concurrency model from Project Loom) that we should leverage. For example, Java’s virtual threads (available in Java 21) can dramatically simplify writing high-concurrency services without complex reactive libraries. We recommend teams migrate to these Java LTS versions and use Spring Boot 3+ for new services, which also enables native image compilation if needed. This will ensure our server-side development is on a solid, efficient foundation.",
"isNew": "FALSE",
"name": "Java (with Modern Frameworks like Spring Boot)",
"quadrant": "Languages & Frameworks",
"ring": "Adopt"
},
{
"description": "Many internal applications, especially those integrated with Microsoft tech or running on Windows, use C# and .NET. The adopt stance is to use .NET 6/7/8 (the unified .NET Core) for any new development in this stack, rather than the old .NET Framework. Modern .NET is cross-platform (runs on Linux and containers) and high-performance, making it suitable even for Linux container deployments. C# remains a powerful language with top-notch tooling (Visual Studio). For example, internal tools or services that align with a Microsoft ecosystem (perhaps something interfacing with Office or using Windows-specific APIs) should be built on latest .NET. Also, the bank’s prior investments in F# or VB.NET (if any) should likely consolidate to C# unless niche cases. By adopting .NET Core and the latest C# features, we ensure long-term support and benefit from performance gains and community improvements.",
"isNew": "FALSE",
"name": ".NET 6+ / C# (for Windows and Cross-Platform)",
"quadrant": "Languages & Frameworks",
"ring": "Adopt"
},
{
"description": "For web front-end development, the bank should continue adopting TypeScript (a typed superset of JavaScript) along with frameworks like React as the standard. Most modern web apps, including customer online banking or internal portals, benefit from TypeScript’s type safety which catches errors early, and React’s component model for building rich UIs. Adopting this stack means phasing out older approaches (like pure JavaScript without types, or legacy frameworks) in favor of TS+React for any new front-end work. React’s ecosystem (Next.js, Redux, etc.) and large talent pool make it a safe choice. We also use secure frameworks and libraries – e.g. adhering to OWASP best practices for front-end. In Scandinavia, where user experience expectations are high, this modern web tech ensures we can deliver responsive, dynamic interfaces. Adopt also module bundlers/build tools (Webpack, Vite) and testing frameworks (Jest, React Testing Library) that align with this stack for a full toolchain.",
"isNew": "FALSE",
"name": "JavaScript/TypeScript & React (Front-End)",
"quadrant": "Languages & Frameworks",
"ring": "Adopt"
},
{
"description": "Python has become a staple for scripting, automation, and data analytics. The bank should fully embrace Python 3 (ensuring Python 2 is long retired) for use cases like data processing pipelines, machine learning models, and glue scripts. Adopt frameworks as needed – e.g. pandas for data manipulation, scikit-learn or PyTorch for machine learning, and FastAPI or Django for any lightweight APIs or internal tools that make sense in Python. Python’s readability and vast ecosystem (especially in finance for things like quantitative analysis or AI) make it valuable. We need to manage it properly: using virtual environments/poetry for dependency management, and addressing performance (perhaps via optimization or moving to faster languages for heavy lifting if needed). Nonetheless, as a general-purpose glue and analytics language, Python is an adopt – it’s already likely used by quants and data engineers in the bank, and we should ensure it’s supported (enterprise package repos, etc.) and integrated.",
"isNew": "FALSE",
"name": "Python 3 (Scripting and Data Science)",
"quadrant": "Languages & Frameworks",
"ring": "Adopt"
},
{
"description": "The bank’s mobile apps (for customers or even internal use) should utilize the modern, native languages on each platform: Swift for iOS development and Kotlin for Android development. Both are adopted industry-wide and provide safety and developer happiness improvements over their predecessors (Objective-C for iOS, Java for Android). We recommend all new iOS features be written in Swift and adopt SwiftUI as it matures for UI development; similarly, new Android work should use Kotlin, taking advantage of its null safety and concise syntax. This ensures our mobile apps remain high-performance and maintainable. If the bank has relied on cross-platform tools or older languages, it may continue for existing apps, but the strategic direction is native with Swift/Kotlin for best user experience and access to latest platform features. Additionally, these languages align with hiring trends (most mobile developers now use them primarily).",
"isNew": "FALSE",
"name": "Swift & Kotlin (Modern Mobile Development)",
"quadrant": "Languages & Frameworks",
"ring": "Adopt"
},
{
"description": "Rust is rising in popularity for systems-level programming due to its performance and memory safety guarantees. We suggest trialing Rust in areas where we currently use C/C++ or where performance and safety are paramount. For example, a pilot project could be writing a high-throughput component (maybe a custom encryption module, a network proxy, or a low-latency service in the trading domain) in Rust. The aim is to see if Rust’s benefits (no null dereferences or data races, very fast execution) justify adopting it for certain use cases. Rust’s growing ecosystem (e.g. many libraries and even some financial trading engines) and community support mean it’s no longer niche. By trialing it now, the bank can build internal expertise and potentially use Rust for performance-critical workloads that currently strain Java/C# or are written in unsafe C++. The trial should evaluate learning curve and toolchain maturity (Cargo, etc.) in our environment.",
"isNew": "FALSE",
"name": "Rust (Systems Programming)",
"quadrant": "Languages & Frameworks",
"ring": "Trial"
},
{
"description": "Go is a simple, efficient language well-suited for cloud-native microservices and infrastructure tooling. Many modern platforms (Docker, Kubernetes) are written in Go. The bank could trial Go for building lightweight microservices or utilities – for instance, an API service that doesn’t need the heavy weight of Java/Spring, or a concurrency-heavy component (Go’s goroutines and channels are great for I/O bound tasks). Go compiles to a single binary and has minimal runtime dependencies, which is an advantage for deployment. A trial might involve a team experienced in Java trying to implement a small service in Go and comparing development speed and performance. Go’s strengths are simplicity and strong standard library (especially for networking). It might fit well for internal tools, DevOps scripts (as an alternative to Python for longer-running daemons), or high-performance network services. We evaluate if adopting Go more widely will improve productivity or performance in certain domains.",
"isNew": "FALSE",
"name": "Go (Golang)",
"quadrant": "Languages & Frameworks",
"ring": "Trial"
},
{
"description": "While REST is prevalent, GraphQL has emerged as a powerful query language for APIs that allows clients to request exactly the data they need. We recommend trialing GraphQL in a suitable scenario – for example, for an internal API that aggregates data from multiple sources (say customer info from various services) into one schema. Using GraphQL could reduce over-fetching/under-fetching issues and give front-end developers more flexibility. For the trial, consider using GraphQL for an internal developer portal API or a reporting API. Assess the complexity it introduces (like needing a GraphQL server and schema management) against the benefits for the consumers. GraphQL can be especially useful for complex client applications (like a relationship manager dashboard that needs bits of data from many services). Keep in mind caching and performance considerations, but a well-defined trial can demonstrate if this technology should be part of our API toolbox. If it proves beneficial, we might adopt it for specific use cases while still using REST elsewhere as appropriate.",
"isNew": "FALSE",
"name": "GraphQL APIs",
"quadrant": "Languages & Frameworks",
"ring": "Trial"
},
{
"description": "Next.js (a React framework for server-side rendering and static site generation) and similar meta-frameworks are worth trialing for our web applications. They can improve performance (through SSR/SSG) and SEO for public-facing sites, and provide a better developer experience with built-in routing and bundling. For instance, the bank’s public website or customer portal might benefit from SSR to deliver faster first-page loads and better support indexing. A trial could involve reimplementing a small subsection of a web app or a new marketing microsite using Next.js, and measuring performance and development productivity. This will help us gauge if we should standardize on such frameworks for certain types of applications. Next.js integrates well with React (which we already adopt), so the learning curve is minimal. Also consider exploring frameworks like Svelte or Vue 3 if teams are interested, but Next.js/React is the likely best fit given our current stack. The trial outcome should inform front-end architecture choices for the next generation of our web apps.",
"isNew": "FALSE",
"name": "Next.js and Modern Web Frameworks",
"quadrant": "Languages & Frameworks",
"ring": "Trial"
},
{
"description": "Flutter is Google’s UI toolkit for building natively compiled apps for mobile, web, and desktop from a single codebase (using the Dart language). We propose a trial of Flutter for either a secondary mobile app or an internal application, to evaluate its feasibility for faster cross-platform development. For example, an internal employee app or a prototype of a simple customer app could be built in Flutter to hit both iOS and Android at once. The trial should assess UI performance, development speed, and how well Flutter can be integrated with our existing native apps or backend (Flutter has its own ecosystem and the Dart language, which is new to our environment). Many companies have had success with Flutter for certain use-cases, but it may not replace all native development. Our goal is to see if it can deliver near-native performance and polish, and if code sharing benefits outweigh any limitations. If the trial is positive, we might consider Flutter for specific applications (perhaps in markets or use-cases where we want to deliver quickly on multiple platforms), while continuing to use native Swift/Kotlin for our main banking apps.",
"isNew": "FALSE",
"name": "Flutter (Cross-Platform Mobile/UI)",
"quadrant": "Languages & Frameworks",
"ring": "Trial"
},
{
"description": "Java’s traditional weakness is startup time and memory usage, which matters for serverless or microservices in resource-constrained environments. Trial the use of GraalVM native image compilation (perhaps via frameworks like Quarkus or Spring Native) to produce ahead-of-time compiled Java binaries. This can drastically reduce startup times and memory overhead, enabling Java to be used in AWS Lambda or high-density container deployments. The bank could pick a small microservice and compile it to a native image, then compare its performance (cold start, memory) to the JVM version. Also evaluate development constraints (not all libraries are GraalVM-friendly yet). This trial will show if these frameworks are ready for prime-time in our stack. As of 2025, Quarkus and Spring Boot Native have matured, and some organizations run critical services with them. The outcome will determine if we recommend this approach for certain microservices where throughput per CPU or startup latency is critical (e.g. scaling up quickly during traffic spikes). It might also inform architecture (using languages like Go vs native Java – we can compare during trials).",
"isNew": "FALSE",
"name": "Native Java Compilation (GraalVM, Quarkus)",
"quadrant": "Languages & Frameworks",
"ring": "Trial"
},
{
"description": "WebAssembly is a binary instruction format allowing languages like C, C++, or Rust (and even others via compilation) to run in a safe sandbox, originally within browsers but now also on the server. We should assess potential uses of WASM in our context. For instance, running heavy computations in the browser via WASM for a rich client app (maybe a complex calculator in online banking), or using WASM on the server edge (Cloudflare Workers use WASM) for fast, isolated execution of code. WebAssembly has been around, but it’s gaining traction as a way to build portable modules that can run anywhere with near-native speed. Keep an eye on frameworks like WASI (WebAssembly System Interface) which enable WASM programs to run server-side with capabilities like file or network access (safely sandboxed). An example to consider: using WASM for a plugin system in our web app, where new features can be added by dropping in WASM modules instead of deploying a full update. While we don’t need to adopt it yet, engineers should experiment with small prototypes and monitor industry trends – it could “unlock new possibilities” for cross-platform development, especially in delivering high-performance features on the web securely.",
"isNew": "FALSE",
"name": "WebAssembly Modules (WASM)",
"quadrant": "Languages & Frameworks",
"ring": "Assess"
},
{
"description": "Deno is a new JavaScript/TypeScript runtime created by the original Node.js author, aiming to improve on Node by providing a secure-by-default environment and modern features (like TypeScript support out of the box). We should assess Deno as an alternative for certain scripting tasks or building REST APIs. It fixes some Node.js pain points (package management, security sandbox) and might align well with our heavy use of TypeScript. For now, Node.js is entrenched (with vast libraries in NPM), so we’re not replacing Node in our stack. But evaluating Deno through a small project (e.g. a simple internal API or script) could prepare us for a future where Deno gains traction. Pay attention to compatibility (can it reuse some Node libraries? how good is the ecosystem tooling?). This assessment will help us understand if Deno could offer benefits like easier secure sandboxing for untrusted code execution (maybe running user-uploaded code for a fintech scenario) or simply a nicer developer experience for TS-centric developers. It’s a tech to watch, not immediately adopt.",
"isNew": "FALSE",
"name": "Deno Runtime (JavaScript/TypeScript)",
"quadrant": "Languages & Frameworks",
"ring": "Assess"
},
{
"description": "While COBOL still runs vital systems on the mainframe, mark it as hold for any new development. No new module or service should be written in COBOL or languages like PL/I. The bank should invest in modernizing or encapsulating existing COBOL logic behind APIs (as mentioned in platforms) rather than expanding its usage. The talent pool for COBOL is shrinking and time-to-market is slower. Future core logic should be implemented in more contemporary languages (Java, C#, etc.) on distributed systems, even if the data ultimately resides on the mainframe. Continue to maintain and optimize existing COBOL, but do not start new projects with it. Similarly, other legacy 4GLs or mainframe scripting languages should be avoided going forward.",
"isNew": "FALSE",
"name": "COBOL & Mainframe-Only Languages (for new development)",
"quadrant": "Languages & Frameworks",
"ring": "Hold"
},
{
"description": "Visual Basic for Applications (VBA) and Microsoft Access-based applications are considered legacy and should be hold for any serious development. In the past, some desktop automation or small departmental apps might have been done in Excel/Access with VBA. These are hard to version control, test, and secure. We advise not extending those or creating new ones. If a user need is significant enough, it should be addressed with a proper application (web or otherwise) rather than an Access database or Excel macro. By holding off these, we encourage shifting to more robust solutions (e.g. a small Python or .NET app, or using approved automation platforms with oversight).",
"isNew": "FALSE",
"name": "VBA/Access Macros",
"quadrant": "Languages & Frameworks",
"ring": "Hold"
},
{
"description": "Scala, and the Akka actor framework built on it, saw use in some finance circles (for high-throughput systems) but come with complexity and recently, Akka’s open-source license changed unfavorably. We recommend a hold on adopting Scala/Akka for new projects. The Scala language, while powerful, has a steep learning curve and can lead to very idiosyncratic code. Moreover, Akka (formerly a go-to for building concurrent, distributed systems on the JVM) now requires a commercial license for newer versions, which complicates its usage. Instead, with Java gaining similar capabilities (e.g. Loom’s lightweight threads) and Kotlin offering a simpler functional style on the JVM, there are alternatives with less overhead. Existing systems in Scala/Akka can continue, but consider migrating to supported models or at least not expanding this stack. New projects that need high concurrency can consider Java/Kotlin with Loom or use proven frameworks in Java/C# or even Rust/Go if suitable, rather than starting fresh with Scala unless there’s already strong Scala expertise and a compelling reason.",
"isNew": "FALSE",
"name": "Scala & Akka (Actor Model) for New Projects",
"quadrant": "Languages & Frameworks",
"ring": "Hold"
},
{
"description": "Put a hold on using outdated front-end libraries/frameworks such as jQuery or AngularJS (Angular 1.x) in new development. These were once popular, but jQuery’s DOM manipulation approach is no longer needed with modern frameworks, and AngularJS reached end-of-life. Continuing to use them results in code that is harder to maintain and potentially insecure (since AngularJS no longer gets official updates). Any remaining uses should be planned for refactoring. For example, an old internal admin UI built with jQuery could be gradually moved to React or at least a modern Angular (the newer Angular 12+). The directive is: no new UI should start with jQuery-based patterns or an obsolete AngularJS version. Instead use our adopted stack (React/TypeScript or possibly Angular 14+ if a team is already using Angular). This will ensure longevity and better state management, and we’ll avoid accumulating technical debt on the front-end.",
"isNew": "FALSE",
"name": "Legacy Front-End Technologies (jQuery, AngularJS)",
"quadrant": "Languages & Frameworks",
"ring": "Hold"
},
{
"description": "The discipline of designing, building, and maintaining self-service tools, workflows, and infrastructure (often manifested as an Internal Developer Platform or IDP) to enable software development teams to deliver applications with speed, quality, and autonomy, while reducing their cognitive load.",
"isNew": "FALSE",
"name": "Platform Engineering",
"quadrant": "Techniques",
"ring": "Trial"
},
{
"description": "A cultural practice and framework that brings financial accountability to the variable spend model of cloud, enabling organizations to gain visibility, allocate costs accurately, optimize spending, and make informed trade-offs between speed, cost, and quality across cloud environments (AWS, Azure).",
"isNew": "FALSE",
"name": "FinOps",
"quadrant": "Techniques",
"ring": "Trial"
},
{
"description": "A set of practices aimed at securing the entire software development lifecycle, from code commit to deployment and operation, focusing on the integrity and security of components, dependencies, build processes, and artifacts. Includes vulnerability scanning (SAST, DAST, SCA), dependency health checks, SBOM generation/management, and securing CI/CD pipelines.",
"isNew": "FALSE",
"name": "Secure Software Supply Chain Practices",
"quadrant": "Techniques",
"ring": "Adopt"
},
{
"description": "MLOps applies DevOps principles to machine learning systems, covering the entire lifecycle from data preparation and model training to deployment, monitoring, and retraining, ensuring reliability, scalability, and governance. AIOps uses AI/ML to automate and enhance IT operations tasks like anomaly detection, root cause analysis, and predictive maintenance.",
"isNew": "FALSE",
"name": "AI/ML Ops (MLOps/AIOps)",
"quadrant": "Techniques",
"ring": "Trial"
},
{
"description": "An approach to software development that focuses on modeling the software based on the underlying business domain. For modernization, it often involves identifying bounded contexts within legacy monoliths (like mainframes) and refactoring them into independent microservices aligned with those domains.",
"isNew": "FALSE",
"name": "Domain-Driven Design (DDD) for Modernization",
"quadrant": "Techniques",
"ring": "Assess"
},
{
"description": "The practice of intentionally injecting controlled failures (e.g., network latency, server crashes, resource exhaustion) into systems to proactively test their resilience, identify weaknesses, and build confidence in their ability to withstand turbulent conditions.",
"isNew": "FALSE",
"name": "Chaos Engineering",
"quadrant": "Techniques",
"ring": "Trial"
},
{
"description": "A management practice focused on optimizing the flow of business value from customer request to delivery, often applied to the software development lifecycle. It involves mapping the value stream, identifying bottlenecks and waste, measuring flow metrics (e.g., lead time, cycle time, throughput, flow efficiency), and continuously improving the end-to-end process.",
"isNew": "FALSE",
"name": "Value Stream Management (VSM)",
"quadrant": "Techniques",
"ring": "Assess"
},
{
"description": "A structured process integrated into the Software Development Lifecycle (SDLC), typically during the design phase, to identify potential security threats, vulnerabilities, and required mitigations for an application or system. Methodologies like STRIDE (Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, Elevation of Privilege) help categorize threats.",
"isNew": "FALSE",
"name": "Threat Modeling",
"quadrant": "Techniques",
"ring": "Adopt"
},
{
"description": "A security model based on the principle of \"never trust, always verify.\" It assumes no implicit trust based on network location and requires strict verification for every user and device attempting to access resources. Key components include strong identity verification (MFA), least privilege access, micro-segmentation, and continuous monitoring.",
"isNew": "FALSE",
"name": "Zero Trust Architecture (ZTA)",
"quadrant": "Techniques",
"ring": "Adopt"
},
{
"description": "Tools that automate the build, test, and deployment pipeline across different environments and platforms. Jenkins is a highly extensible open-source standard; GitLab CI offers tight integration with GitLab SCM; Azure DevOps provides a comprehensive suite, especially for Azure-centric teams.",
"isNew": "FALSE",
"name": "Cross-Platform CI/CD Orchestrators (Jenkins, GitLab CI, Azure DevOps)",
"quadrant": "Tools",
"ring": "Adopt"
},
{
"description": "Tools that allow infrastructure (networks, VMs, load balancers, cloud services) to be defined, deployed, and managed using code. Terraform is a widely adopted open-source tool known for its multi-cloud support (AWS, Azure, GCP, etc.) and declarative approach.",
"isNew": "FALSE",
"name": "IaC for Multi-Cloud (Terraform)",
"quadrant": "Tools",
"ring": "Adopt"
},
{
"description": "A combination of tools used to identify security vulnerabilities in applications. Static Application Security Testing (SAST) analyzes source code, Dynamic Application Security Testing (DAST) tests running applications, and Software Composition Analysis (SCA) identifies vulnerabilities in open-source dependencies. Vendors like Veracode, Checkmarx, and Snyk offer comprehensive platforms.",
"isNew": "FALSE",
"name": "Application Security Testing (AST) Suite (SAST, DAST, SCA)",
"quadrant": "Tools",
"ring": "Adopt"
},
{
"description": "Platforms that ingest and analyze telemetry data (metrics, logs, traces, events) from across the IT stack (including potentially mainframes) to provide deep insights into system health, performance, and behavior. Key players include Dynatrace, Datadog, Splunk (AppDynamics), and IBM Instana.",
"isNew": "FALSE",
"name": "Observability Platforms (with Hybrid/Mainframe Integration)",
"quadrant": "Tools",
"ring": "Trial"
},
{
"description": "Specialized third-party or native cloud provider tools designed to support FinOps practices by providing detailed cost visibility, allocation, optimization recommendations, budgeting, forecasting, and anomaly detection across cloud environments. Examples include Apptio Cloudability, Flexera (CloudHealth), Harness CCM, CloudZero, and native tools like AWS Cost Explorer and Azure Cost Management.",
"isNew": "FALSE",
"name": "Cloud Cost Management Tools (FinOps Platforms)",
"quadrant": "Tools",
"ring": "Trial"
},
{
"description": "Tools designed to automatically generate Software Bills of Materials (SBOMs) during the build process and help manage them (store, query, monitor for new vulnerabilities). These tools typically support standard formats like CycloneDX or SPDX. Many SCA tools now include SBOM generation capabilities.",
"isNew": "FALSE",
"name": "SBOM Generation & Management Tools",
"quadrant": "Tools",
"ring": "Adopt"
},
{
"description": "Products that provide secure remote access to specific applications based on Zero Trust principles, verifying user identity and device posture before granting access, rather than providing broad network access like traditional VPNs. Leading vendors include Zscaler, Palo Alto Networks, and Fortinet.",
"isNew": "FALSE",
"name": "Zero Trust Network Access (ZTNA) Solutions",
"quadrant": "Tools",
"ring": "Adopt"
},
{
"description": "Integrated sets of tools aimed at applying DevOps practices (e.g., modern IDEs, Git-based SCM, automated CI/CD pipelines, automated testing) to mainframe application development (COBOL, PL/I). Vendors like Broadcom (integrating Endevor, Code4z for VS Code, Test4z) and IBM (ZDevOps suite including Wazi, ADDI, DBB, UrbanCode Deploy) offer solutions, often integrating with open standards like Git and Jenkins, and open-source frameworks like Zowe. Compuware ISPW (now BMC) also provides relevant capabilities.",
"isNew": "FALSE",
"name": "Mainframe DevOps Toolchains (Modern Integrations)",
"quadrant": "Tools",
"ring": "Assess"
},
{
"description": "Alternatives to Terraform for Infrastructure as Code. Pulumi allows defining infrastructure using general-purpose programming languages (Python, TypeScript, Go, Java,.NET). Crossplane extends the Kubernetes API to manage external cloud resources, enabling a GitOps approach centered on Kubernetes Custom Resources.",
"isNew": "FALSE",
"name": "IaC Alternatives (Pulumi, Crossplane)",
"quadrant": "Tools",
"ring": "Assess"
},
{
"description": "Software tools designed to facilitate the controlled injection of failures into systems for Chaos Engineering experiments. Options include managed platforms (Gremlin, Steadybit), open-source frameworks (Chaos Mesh, LitmusChaos), and cloud-native services (AWS Fault Injection Simulator, Azure Chaos Studio).",
"isNew": "FALSE",
"name": "Chaos Engineering Tools",
"quadrant": "Tools",
"ring": "Trial"
},
{
"description": "Software platforms designed to support VSM practices by integrating data from various development and delivery tools (e.g., ALM, CI/CD, ITSM) to visualize value streams, measure flow metrics, identify bottlenecks, and correlate delivery performance with business outcomes. Examples include Planview (Tasktop Viz), Broadcom ValueOps (Clarity, Rally, ConnectALL), ServiceNow SPM, Atlassian Jira Align (though more focused on scaled agile planning), and others.",
"isNew": "FALSE",
"name": "Value Stream Management (VSM) Platforms",
"quadrant": "Tools",
"ring": "Assess"
},
{
"description": "Cloud provider services that manage the Kubernetes control plane, simplifying the deployment, management, and scaling of containerized applications. AWS Elastic Kubernetes Service (EKS) and Azure Kubernetes Service (AKS) are the respective offerings from AWS and Azure.",
"isNew": "FALSE",
"name": "Managed Kubernetes (AWS EKS, Azure AKS)",
"quadrant": "Platforms",
"ring": "Adopt"
},
{
"description": "Platforms that provide capabilities for designing, securing, publishing, monitoring, and analyzing APIs across their lifecycle. Essential for managing internal and external APIs, enforcing security policies, and enabling developer ecosystems. Key vendors include Google (Apigee), Kong, Salesforce (MuleSoft), Axway, and cloud providers (AWS API Gateway, Azure API Management).",
"isNew": "FALSE",
"name": "API Management Platforms",
"quadrant": "Platforms",
"ring": "Adopt"
},
{
"description": "Cloud computing execution model where the cloud provider dynamically manages the allocation and provisioning of servers. Code is typically run in stateless compute containers triggered by events, and billing is based on actual execution time and resources consumed. AWS Lambda and Azure Functions are leading FaaS (Functions as a Service) offerings.",
"isNew": "FALSE",
"name": "Serverless Compute (AWS Lambda, Azure Func)",
"quadrant": "Platforms",
"ring": "Adopt"
},
{
"description": "A platform built by an organization's platform engineering team to provide developers with self-service capabilities for building, deploying, and managing applications. Often includes a developer portal (like Backstage) for service discovery, documentation, and accessing underlying tools/workflows. Can be built using open-source frameworks (e.g., Backstage) or based on commercial products (e.g., Harness IDP, Port, Roadie).",
"isNew": "FALSE",
"name": "Internal Developer Platforms (IDPs)",
"quadrant": "Platforms",
"ring": "Trial"
},
{
"description": "Architectural approaches and enabling platforms for managing data in complex, distributed environments. Data Fabric focuses on using metadata and automation to create a unified, integrated data layer across diverse sources. Data Mesh promotes decentralization, with domain-specific teams owning and serving their data as products, supported by a self-serve data infrastructure platform and federated governance.",
"isNew": "FALSE",
"name": "Data Mesh / Data Fabric Platforms",
"quadrant": "Platforms",
"ring": "Assess"
},
{
"description": "A feature of z/OS (since v2.4) that allows running Linux on IBM Z applications packaged as Docker containers directly within a z/OS LPAR. It provides a tailored Linux kernel and Docker engine managed by z/OS.",
"isNew": "FALSE",
"name": "Mainframe Modernization Platforms (z/OS Container Extensions - zCX)",
"quadrant": "Platforms",
"ring": "Assess"
},
{
"description": "Software distributions that allow organizations to deploy and manage Kubernetes clusters on their own infrastructure (on-premises or private cloud). Red Hat OpenShift is an enterprise-focused distribution with integrated developer and operational tools. Rancher (by SUSE) is known for managing multiple Kubernetes clusters across different infrastructures.",
"isNew": "FALSE",
"name": "On-Prem Kubernetes Distributions (e.g., OpenShift, Rancher)",
"quadrant": "Platforms",
"ring": "Assess"
},
{
"description": "Platforms designed to continuously monitor and improve the security posture of cloud environments. Cloud Security Posture Management (CSPM) focuses on identifying misconfigurations and ensuring compliance. Cloud-Native Application Protection Platforms (CNAPP) offer a broader scope, integrating CSPM with Cloud Workload Protection Platforms (CWPP - runtime security), Kubernetes Security Posture Management (KSPM), and other capabilities for end-to-end cloud-native security. Vendors include Wiz, Palo Alto Networks (Prisma Cloud), Check Point (CloudGuard), Microsoft (Defender for Cloud), Fortinet (FortiCNAPP), Sysdig, etc..",
"isNew": "FALSE",
"name": "Cloud Security Platforms (CSPM/CNAPP)",
"quadrant": "Platforms",
"ring": "Adopt"
},
{
"description": "Java remains a dominant language for enterprise backend development, especially in finance. Long-Term Support (LTS) versions provide stability and security updates. Frameworks like Spring Boot and Quarkus optimize Java for building cloud-native microservices and improving performance/startup times. Java runs efficiently on IBM Z as well.",
"isNew": "FALSE",
"name": "Java (LTS Versions - e.g., 17, 21) with Modern Frameworks (Spring Boot, Quarkus)",
"quadrant": "Languages & Frameworks",
"ring": "Adopt"
},
{
"description": "A versatile, high-level language with simple syntax and an extensive ecosystem of libraries, making it extremely popular for data science, AI/ML, scripting, automation, and web/API development.",
"isNew": "FALSE",
"name": "Python (for AI/ML, Automation, APIs)",
"quadrant": "Languages & Frameworks",
"ring": "Adopt"
},
{
"description": "JavaScript libraries/frameworks for building interactive, component-based user interfaces (UIs) for web applications. React (library, by Meta) is known for its flexibility, component model, and large ecosystem. Angular (framework, by Google, TypeScript-based) is more opinionated and often favored for large enterprise applications due to its structure and comprehensive features.",
"isNew": "FALSE",
"name": "Modern Frontend Frameworks (React, Angular)",
"quadrant": "Languages & Frameworks",
"ring": "Adopt"
},
{
"description": "A modern, statically-typed programming language developed by JetBrains that runs on the JVM and is fully interoperable with Java. It offers features like null safety, coroutines (for asynchronous programming), data classes, and more concise syntax compared to Java.",
"isNew": "FALSE",
"name": "Kotlin (for Backend Development)",
"quadrant": "Languages & Frameworks",
"ring": "Trial"
},
{
"description": "Libraries and frameworks that simplify the development of applications interacting with event streaming platforms like Apache Kafka. Kafka Streams is a client library specifically for building stream processing applications with Kafka. Spring Cloud Stream provides abstractions over different message brokers (including Kafka and RabbitMQ), integrating well with the Spring ecosystem.",
"isNew": "FALSE",
"name": "Event-Driven Architecture Frameworks (Kafka Streams, Spring Cloud Stream)",
"quadrant": "Languages & Frameworks",
"ring": "Adopt"
},
{
"description": "Libraries providing tools and abstractions for building, training, and deploying machine learning models. TensorFlow (Google) and PyTorch (Meta) are leading deep learning frameworks. Scikit-learn is a comprehensive library for traditional ML algorithms (regression, classification, clustering).",
"isNew": "FALSE",
"name": "AI/ML Frameworks (TensorFlow, PyTorch, Scikit-learn)",
"quadrant": "Languages & Frameworks",
"ring": "Adopt"
},
{
"description": "Defining the strategic approach to handling existing COBOL and PL/I codebases on the mainframe. Options range from maintaining and enhancing in place, integrating via APIs, refactoring business logic, selectively converting to modern languages (like Java or C# using automated tools), or full replacement.",
"isNew": "FALSE",
"name": "COBOL/PL/I Modernization Strategy (Language Perspective)",
"quadrant": "Languages & Frameworks",
"ring": "Assess"
},
{
"description": "Platforms that allow users (including non-professional developers or \"citizen developers\") to build applications using visual interfaces, drag-and-drop components, and pre-built templates, requiring minimal or no traditional coding.",
"isNew": "FALSE",
"name": "Low-Code/No-Code (LCNC) Platforms",
"quadrant": "Languages & Frameworks",
"ring": "Assess"
},
{
"description": "A progressive JavaScript framework known for its approachability, excellent documentation, and flexibility. It can be adopted incrementally and is often considered easier to learn initially than React or Angular.",
"isNew": "FALSE",
"name": "Modern Frontend Frameworks (Vue.js)",
"quadrant": "Languages & Frameworks",
"ring": "Assess"
},
{
"description": "A binary instruction format enabling deployment of high-performance applications (written in languages like C++, Rust, Go) on the web, running alongside JavaScript in the browser.",
"isNew": "FALSE",
"name": "WebAssembly (Wasm)",
"quadrant": "Languages & Frameworks",
"ring": "Assess"
},
{
"description": "Using Java versions no longer receiving public security patches or updates from Oracle or the OpenJDK community.",
"isNew": "FALSE",
"name": "Older/Unsupported Java Versions (e.g., Java 8 or earlier)",
"quadrant": "Languages & Frameworks",
"ring": "Hold"
},
{
"description": "Utilizing outdated frameworks like AngularJS (which is end-of-life) or relying heavily on jQuery for building complex, modern Single Page Applications (SPAs).",
"isNew": "FALSE",
"name": "Legacy Frontend Frameworks (e.g., AngularJS, jQuery for complex UIs)",
"quadrant": "Languages & Frameworks",
"ring": "Hold"
}
]