@brandonbryant12
Created November 22, 2024 16:59
Overview of the Catalog Backend Process
The catalog backend consists of several stages:
Ingestion: Entities are ingested from various sources, such as YAML files, SCM integrations (e.g., GitHub, GitLab), cloud providers, and custom entity providers.
Processing: Ingested entities undergo processing by a pipeline of processors that validate, enrich, and transform them as needed.
Stitching: Processed entities are stitched together with their relations and other associated data to form the final entity representation.
Storage: Final entities are stored in the final_entities table and made available for querying and use by the frontend and other services.
Detailed Explanation of the Ingestion, Processing, and Stitching Process
1. Ingestion
Entity Providers: Entity providers are responsible for supplying entities to the catalog. They can be built-in providers (e.g., reading from a config file, integrating with SCM systems) or custom ones.
Entity Uniqueness: Each entity is uniquely identified by its entityRef, written as kind:namespace/name (the namespace defaults to default when omitted); a quick example of the ref helpers follows.
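For reference, the ref helpers live in @backstage/catalog-model:

```ts
import { parseEntityRef, stringifyEntityRef } from '@backstage/catalog-model';

// Produces 'component:default/my-service' (the kind is lower-cased)
const ref = stringifyEntityRef({
  kind: 'Component',
  namespace: 'default',
  name: 'my-service',
});

// Produces { kind: 'component', namespace: 'default', name: 'my-service' }
const compound = parseEntityRef('component:default/my-service');
```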
2. Processing
Refresh Loop: The catalog uses a refresh loop to periodically process entities.
refresh_state Table: This table acts as the staging area for entities during processing. It contains unprocessed entities (unprocessed_entity), processed entities (processed_entity), and metadata such as errors and update timestamps.
3. Stitching
Purpose of Stitching: The stitching process combines the processed entity data with relations and other metadata to produce the final entity that will be stored in the catalog.
Handling Relations: Relations are gathered from the relations table, which stores relations emitted during processing.
Final Entity Construction: The processed entity is augmented with its relations and any additional metadata (e.g., status items, annotations) to form the final entity.
4. Storage
final_entities Table: The final stitched entities are stored in this table, which serves as the authoritative source for entities within the catalog.
Search Indexing: Entities are indexed for search by extracting searchable fields and storing them in the search table; a sketch of this flattening follows.
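As a rough illustration of that extraction (the real buildEntitySearch logic handles arrays and special keys differently; this sketch only shows the general flattening idea):

```ts
import { Entity } from '@backstage/catalog-model';

// Flatten an entity into lower-cased key/value rows for the search table.
// Illustrative only; the row shape and traversal rules are simplified.
function toSearchRows(entityId: string, entity: Entity) {
  const rows: Array<{ entity_id: string; key: string; value: string }> = [];
  const walk = (prefix: string, node: unknown) => {
    if (node && typeof node === 'object') {
      for (const [k, v] of Object.entries(node as Record<string, unknown>)) {
        walk(prefix ? `${prefix}.${k}` : k, v);
      }
    } else if (node !== undefined) {
      rows.push({
        entity_id: entityId,
        key: prefix.toLowerCase(),
        value: String(node).toLowerCase(),
      });
    }
  };
  walk('', entity);
  return rows;
}
```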
The Stitching Process in Detail
Entry Point: performStitching Function
The stitching process is primarily handled by the performStitching function. Here's how it works (a simplified sketch follows the list):
Fetch Processed Entity: Retrieve the processed entity from the refresh_state table using the provided entityRef.
Check for Existence: If the entity does not exist in the refresh_state table or has not been processed yet, the stitching is abandoned for that entity.
Retrieve Relations: Fetch all relations associated with the entity from the relations table.
Assemble Final Entity:
Combine the processed entity with its relations.
Add metadata such as metadata.uid, metadata.etag, and status items if there are errors.
Ensure security by sanitizing annotations like backstage.io/view-url and backstage.io/edit-url to prevent injection attacks.
Hash Generation: Compute a stable hash of the final entity to detect changes.
Check for Changes: If the hash matches the previous hash, the entity hasn't changed since the last stitch, and the process can skip updating the final_entities table.
Update final_entities Table: If there are changes, update the final entity in the final_entities table.
Update Search Index: Rebuild the search index entries for the entity in the search table.
Deferred Stitching Handling: If the stitching strategy is deferred (i.e., asynchronous stitching), mark the stitch as completed.
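Put together, the flow is roughly the following shape. This is a sketch, not the actual implementation: the hash helper is hypothetical and the column handling is simplified, though the table and column names follow the descriptions above.

```ts
import { Knex } from 'knex';
import { createHash } from 'crypto';

// Hypothetical stable-hash helper (the real code uses its own scheme)
function generateStableHash(value: unknown): string {
  return createHash('sha1').update(JSON.stringify(value)).digest('hex');
}

async function performStitchingSketch(
  knex: Knex,
  entityRef: string,
): Promise<'changed' | 'unchanged' | 'abandoned'> {
  // 1-2. Fetch the processed entity; abandon if missing or not yet processed
  const row = await knex('refresh_state').where({ entity_ref: entityRef }).first();
  if (!row?.processed_entity) return 'abandoned';

  // 3. Gather the relations emitted during processing
  const relations = await knex('relations').where({
    originating_entity_id: row.entity_id,
  });

  // 4. Assemble the final entity: processed entity + relations + metadata
  const entity = {
    ...JSON.parse(row.processed_entity),
    relations: relations.map(r => ({ type: r.type, targetRef: r.target_entity_ref })),
  };

  // 5-6. Hash the result and skip the write if nothing changed
  const hash = generateStableHash(entity);
  if (hash === row.result_hash) return 'unchanged';

  // 7. Upsert the final entity
  await knex('final_entities')
    .insert({ entity_id: row.entity_id, final_entity: JSON.stringify(entity), hash })
    .onConflict('entity_id')
    .merge();

  return 'changed';
}
```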
Stitching Strategies
The catalog supports two stitching strategies:
Immediate Stitching (Default):
Stitching occurs synchronously after processing.
Suitable for immediate consistency but can have performance implications under heavy load.
Deferred Stitching:
Stitching is queued and handled asynchronously by a background task.
Useful for smoothing out performance under heavy ingestion and for large catalogs.
In recent Backstage versions the strategy is selected via catalog.stitchingStrategy.mode in app-config (verify against your version).
Handling Entities from Different Providers
Conflict Resolution for Entities with the Same entityRef:
Single Source of Truth: The catalog expects that for each unique entityRef, there is a single source of truth.
Overwrite Prevention:
The catalog backend includes logic to prevent entities from different sources from overwriting each other.
If an entity with a given entityRef already exists in the refresh_state table, an attempt to register another entity with the same entityRef but a different location key is rejected (see the sketch at the end of this section).
Error Handling:
When a conflict is detected during ingestion, an error is logged, and the new entity is not ingested.
This ensures that only one entity with a specific entityRef exists in the catalog at any time, maintaining data integrity.
No Merging of Entities from Different Sources:
Entities with the same entityRef from different providers are not merged.
The catalog does not support merging entity data from multiple sources into a single entity.
The last successfully processed and stitched entity with a given entityRef will be the one available in the final_entities table.
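The mechanism behind origin-based rejection is the location key attached to each emitted entity. A sketch under that assumption (the provider connections, entity, and key values are illustrative):

```ts
import { EntityProviderConnection } from '@backstage/plugin-catalog-node';
import { Entity } from '@backstage/catalog-model';

// Sketch: two providers claiming the same entityRef with different location keys.
async function emitFromBothProviders(
  snow: EntityProviderConnection,
  github: EntityProviderConnection,
  app: Entity,
) {
  // The Snow provider writes first and stamps its own location key
  await snow.applyMutation({
    type: 'full',
    entities: [{ entity: app, locationKey: 'snow-provider:app-123' }],
  });

  // The GitHub provider's write carries a different location key, so the
  // catalog rejects it (logged as a conflict) rather than overwriting
  await github.applyMutation({
    type: 'full',
    entities: [{ entity: app, locationKey: 'url:https://github.com/acme/app' }],
  });
}
```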
De-duplication and Conflict Checks
Processing Stage:
During processing, the catalog ensures that entities are unique.
Processors validate entities and check for conflicts before they proceed to stitching.
Uniqueness Constraints:
The database schema includes uniqueness constraints on the entityRef to enforce uniqueness at the storage level.
Error Messaging:
Attempts to register conflicting entities result in clear error messages, aiding in debugging and resolution.
How the refresh_state Table Is Used
Staging Area: The refresh_state table holds entities during the processing phase.
Columns:
entity_id: Unique identifier for the entity.
entity_ref: The unique entityRef of the entity.
unprocessed_entity: The raw entity as ingested.
processed_entity: The entity after processing.
errors: Any errors encountered during processing.
next_update_at: Timestamp indicating the next scheduled refresh.
Processing Workflow (illustrated below):
Insertion: New entities are inserted into refresh_state with their unprocessed_entity populated.
Processing: Entities are processed, and the processed_entity is updated.
Error Handling: If processing errors occur, they are logged in the errors column.
Scheduling: The next_update_at column determines when the entity should be refreshed again.
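Illustratively, the initial insert looks something like this (simplified; the real logic also handles conflicts, hashing, and location keys):

```ts
import { Knex } from 'knex';
import { Entity, stringifyEntityRef } from '@backstage/catalog-model';
import { randomUUID } from 'crypto';

// Sketch: how a newly ingested entity might land in refresh_state.
// Column names follow the list above.
async function insertUnprocessed(knex: Knex, entity: Entity) {
  await knex('refresh_state').insert({
    entity_id: randomUUID(),
    entity_ref: stringifyEntityRef(entity),
    unprocessed_entity: JSON.stringify(entity),
    errors: '[]',
    next_update_at: knex.fn.now(),
  });
}
```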
Transition from Unprocessed to Final Entities
Ingestion into refresh_state:
Unprocessed entities are stored in the refresh_state table.
Processing Pipeline:
Entities are processed by the catalog's processors, validating and enriching them.
The result is stored in the processed_entity column.
Stitching:
The performStitching function retrieves the processed_entity and associated relations.
A final entity is constructed.
Storage in final_entities:
The final entity is stored in the final_entities table.
It includes the fully stitched entity ready for use.
Search Index Update:
Searchable fields are extracted, and the entity is indexed in the search table.
Conflict Detection and Handling
Entity Ref Conflicts:
When someone attempts to register an entity with a conflicting entityRef, the system detects the conflict.
Origin Location Check:
The catalog checks if the incoming entity's origin matches the existing one.
If they differ, the new entity is rejected to prevent overwriting.
Error Responses:
Conflicts result in errors being recorded in the errors column of refresh_state.
Appropriate error messages are logged for debugging.
Conclusion
In summary, the Backstage catalog backend processes entities through ingestion, processing, and stitching stages before storing them as final entities. The stitching process ensures that each entity is complete with its relations and metadata. Crucially, the catalog maintains a one-to-one relationship between entityRef and entities, preventing entities with the same entityRef from different sources from overwriting each other or being merged. The refresh_state table plays a pivotal role in tracking entities as they transition from unprocessed to processed states before being stitched into the final entities stored in the catalog.
References to Key Files and Code Sections
Stitching Implementation:
plugins/catalog-backend/src/stitching/DefaultStitcher.ts
Entry point for the stitching process.
plugins/catalog-backend/src/database/operations/stitcher/performStitching.ts
Core logic for stitching entities.
Refresh State Mechanism:
plugins/catalog-backend/src/database/tables.ts
Definitions of refresh_state and final_entities tables.
Conflict Handling:
Entity uniqueness enforced through database constraints and logic in the processing pipeline.
By understanding these components, one can appreciate how Backstage ensures data integrity within the catalog backend and handles entities from various providers in a controlled and predictable manner.
I want short but specific code references (snippets + file names) from the actual code base, included in this report, to support these claims.
Sure! Here's an updated explanation that includes references to specific code files and locations that support these claims:
---
Handling Entities from Different Providers
Conflict Resolution for Entities with the Same entityRef
Single Source of Truth: The Catalog ensures that each unique entityRef corresponds to a single entity.
Overwrite Prevention:
The logic to prevent entities from different sources from overwriting each other is implemented in the processing and stitching steps.
File: plugins/catalog-backend/src/database/operations/stitcher/performStitching.ts
The performStitching function compares a stable hash of the newly assembled entity with the stored hash and skips the write when nothing has changed.
Origin conflicts are rejected earlier, when provider mutations are applied: a write whose location key differs from the stored one never reaches stitching.
Error Handling:
When a conflict is detected during ingestion, an error is logged, and the new entity is not ingested.
File: plugins/catalog-backend/src/stitching/DefaultStitcher.ts
The stitch method handles stitching errors and logs appropriate messages.
If stitching cannot proceed due to conflicts, it records the error in the entity's status.
No Merging of Entities from Different Sources
Entities with the same entityRef from different providers are not merged.
The Catalog does not support merging entity data from multiple sources into a single entity.
The last successfully processed and stitched entity with a given entityRef is the one available in the final_entities table.
File: plugins/catalog-backend/src/database/operations/stitcher/performStitching.ts
The performStitching function updates the final_entities table with the processed entity.
It ensures that only one entity per entityRef exists in the final_entities table.
Conflict Detection and Handling
De-duplication and Conflict Checks
Processing Stage
Uniqueness Enforcement:
During processing, the Catalog ensures that entities are unique.
File: plugins/catalog-backend/src/catalog/types.ts
The processors are responsible for emitting entities.
Processors validate entities and check for conflicts before proceeding.
Conflict Detection:
If a processor emits an entity with an entityRef that conflicts with an existing entity, the processing step logs an error.
File: plugins/catalog-backend/src/service/CatalogBuilder.ts
The CatalogBuilder assembles the processing pipeline.
Conflict detection logic is integrated into the processing workflow.
Uniqueness Constraints
Database Schema Constraints:
The database schema includes uniqueness constraints on the entity_ref to enforce uniqueness at the storage level.
File: plugins/catalog-backend/migrations/20200511113813_init.js
This migration script defines the initial database schema.
The entities table includes a unique index on entity_ref (an illustrative sketch follows).
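An illustrative sketch of what such a constraint looks like in a Knex migration; the actual 20200511113813_init.js is JavaScript and its table and column names may differ:

```ts
import { Knex } from 'knex';

// Hypothetical migration excerpt: enforce entityRef uniqueness at the storage level
export async function up(knex: Knex): Promise<void> {
  await knex.schema.createTable('entities', table => {
    table.uuid('id').primary();
    table.string('entity_ref').unique().notNullable(); // one row per entityRef
    table.text('entity_body').notNullable(); // column name is illustrative
  });
}
```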
Error Messaging
Logging and Error Handling:
Attempts to register conflicting entities result in errors being recorded.
File: plugins/catalog-backend/src/database/operations/stitcher/performStitching.ts
The performStitching function handles errors encountered during stitching.
It logs appropriate error messages and records them in the errors column of the refresh_state table.
Processor Errors:
File: plugins/catalog-backend/src/catalog/types.ts
When processors encounter conflicts, they emit errors using the processingResult mechanism.
These errors are stored and surfaced to aid in debugging and resolution — for example:
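A minimal fragment showing that mechanism (the message text is illustrative):

```ts
import { CatalogProcessorEmit, processingResult } from '@backstage/plugin-catalog-backend';
import { LocationSpec } from '@backstage/plugin-catalog-common';

// Fragment: how a processor surfaces a conflict to the pipeline
function reportConflict(emit: CatalogProcessorEmit, location: LocationSpec, entityRef: string) {
  emit(
    processingResult.generalError(
      location,
      `Conflicting entityRef ${entityRef}: entity is already provided by another source`,
    ),
  );
}
```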
---
Example Flow of Conflict Handling:
Ingestion:
An entity with a particular entityRef is ingested from a provider.
It's stored in the refresh_state table with its unprocessed_entity.
Processing:
The entity is processed, and processors validate its uniqueness.
If a conflict is detected (e.g., another entity with the same entityRef exists), an error is emitted.
Stitching:
The performStitching function attempts to stitch the entity.
By this point, cross-source conflicts have already been resolved at ingestion; stitching only compares the new entity's hash against the stored one.
If nothing has changed, the write is skipped; otherwise the final_entities row is updated.
Error Logging:
Errors are recorded in the errors column of the refresh_state table.
These errors can be retrieved and displayed to users or administrators for debugging.
Final Entity Storage:
The entity is written to the final_entities table only when its content has actually changed.
This ensures that the Catalog maintains a single, consistent version of each entity.
---
Important Files and Functions:
performStitching Function:
File: plugins/catalog-backend/src/database/operations/stitcher/performStitching.ts
Handles the core logic of stitching, conflict detection, and deciding whether to update the final_entities table.
DefaultStitcher Class:
File: plugins/catalog-backend/src/stitching/DefaultStitcher.ts
Manages the stitching process, including handling deferred stitching strategies and logging.
Database Migrations:
File: plugins/catalog-backend/migrations/20200511113813_init.js
Defines the database schema, including uniqueness constraints on entities.
Catalog Processing Types:
File: plugins/catalog-backend/src/catalog/types.ts
Defines the types and interfaces used during processing, including how processors emit entities and handle errors.
Catalog Builder:
File: plugins/catalog-backend/src/service/CatalogBuilder.ts
Assembles the processing pipeline and integrates processors that enforce entity uniqueness.
---
Note: the short snippets above are illustrative sketches rather than verbatim excerpts; refer to the mentioned files and functions in the codebase for the exact implementations. These components collectively ensure that the Catalog backend handles entity conflicts appropriately, maintains data integrity, and provides clear error messaging when conflicts arise.
---
Is there anything else I can help clarify or any specific aspect you'd like to explore further?
OK, finally I want to create a POC.
We have two providers, a Snow provider and a GitHub provider, that are providing info about the same entity.
The Snow provider is the source of truth, whereas the GitHub provider is adding supplemental info.
We need to create an Application processor that will:
- if it's not an Application entity, continue
- if it's not from the GitHub entity provider, continue
- if it does not have the annotation snow-app-id, emit an error and disregard the entity
- look up an entity in the catalog by the name of the snow-app-id; if no app is found, emit an error
- merge the entity being processed with the entity found in the catalog, and push the entity to continue through the pipeline
Your task is to analyze whether this processor makes sense in the Backstage idiomatic approach, and then also write this processor.
Certainly! Let's analyze your proposed processor and see how it fits within Backstage's idiomatic approach, and then I'll provide you with an implementation.
Analysis of the Proposed Processor
Does the Processor Make Sense in Backstage's Idiomatic Approach?
In Backstage, each entity is managed by a single source of truth to maintain data integrity and prevent conflicts. The Catalog is designed to avoid merging entities from different providers with the same entityRef. However, enriching entities during processing is a common practice.
Given that you want the Snow provider to be the source of truth and the GitHub provider to add supplemental information to existing entities, the following considerations arise:
Entity Ownership: Since the Snow provider is the source of truth, it should be responsible for emitting the main entity. The GitHub provider should not emit entities with the same entityRef as the Snow provider.
Enrichment Process: Instead of the GitHub provider emitting entities that might conflict with those from the Snow provider, it would be more idiomatic to have a processor that enriches entities emitted by the Snow provider with additional data from GitHub.
Data Enrichment via Processors: Processors in Backstage are meant to enrich entities during the processing phase. They can fetch additional data and add it to entities before they are stored.
Avoid Conflicts: Emitting entities from both providers with the same entityRef can lead to conflicts and is generally discouraged.
Recommended Approach
Given the above, a more idiomatic approach would be:
Single Source of Truth: Only the Snow provider emits the main Application entities into the Catalog.
Enrichment Processor: Implement a custom processor that, during the processing of entities from the Snow provider, fetches supplemental information from GitHub based on an annotation or identifier.
Annotations and Relations: Use annotations or relations to link entities to external data sources without causing conflicts.
However, if you still want to proceed with the GitHub provider emitting entities and merging data, you need to handle conflicts carefully. This is less idiomatic but can be achieved by writing a custom processor that:
Intercepts Entities from the GitHub Provider: The processor checks if an entity comes from the GitHub provider.
Looks Up the Existing Entity: It fetches the existing entity from the Catalog using the entityRef from the snow-app-id annotation.
Merges the Entities: It merges the data from the GitHub entity into the existing entity.
Pushes the Merged Entity: The merged entity is then pushed back into the processing pipeline.
This approach requires careful handling to avoid conflicts and ensure data integrity.
Processor Implementation
Below is an implementation of the custom processor that merges entities from the GitHub provider with existing entities from the Snow provider.
File: plugins/catalog-backend/src/processing/SnowGitHubMergeProcessor.ts (the full implementation is reproduced at the end of this gist)
Explanation
Processor Name: The processor is named SnowGitHubMergeProcessor.
Entity Kind Check: It processes only entities of kind Application.
Origin Check: It checks if the entity originates from GitHub by examining the location.
Annotation Check: It ensures the presence of the snow-app-id annotation.
Entity Lookup: It uses the CatalogApi to fetch the existing entity from the Catalog.
Error Handling: Emits errors using processingResult.generalError if any checks fail.
Entity Merge:
Merges metadata.annotations from the GitHub entity into the existing entity.
Merges spec fields, with fields from the GitHub entity overriding those in the existing entity.
You can customize the merge logic based on your requirements.
Emit Merged Entity: Returns the merged entity to continue through the processing pipeline.
Integration into the Catalog Backend
To use this processor, register it with the CatalogBuilder in your backend. A minimal sketch of the legacy-backend wiring follows; the file path and the PluginEnvironment type come from the standard create-app scaffold:
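```ts
// packages/backend/src/plugins/catalog.ts (legacy backend wiring; a sketch)
import { Router } from 'express';
import { CatalogBuilder } from '@backstage/plugin-catalog-backend';
import { CatalogClient } from '@backstage/catalog-client';
import { SnowGitHubMergeProcessor } from './SnowGitHubMergeProcessor';
import { PluginEnvironment } from '../types'; // scaffolded by create-app

export default async function createPlugin(env: PluginEnvironment): Promise<Router> {
  const builder = await CatalogBuilder.create(env);

  // Register the custom merge processor alongside the defaults
  builder.addProcessor(
    new SnowGitHubMergeProcessor({
      catalogApi: new CatalogClient({ discoveryApi: env.discovery }),
      logger: env.logger,
    }),
  );

  const { processingEngine, router } = await builder.build();
  await processingEngine.start();
  return router;
}
```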
Notes and Considerations
Circular Dependencies: Fetching entities from the Catalog during processing can create circular dependencies. Ensure that this does not cause issues in your environment.
Performance Impact: Looking up entities from the Catalog for each processed entity may impact performance. Consider caching or batching requests if necessary.
Data Consistency: Since you are merging data from entities emitted by different providers, ensure that this doesn't lead to inconsistent states.
Error Handling: Be cautious with error handling to prevent the processor from causing failures in the processing pipeline.
Testing: Thoroughly test the processor with various scenarios to ensure it behaves as expected.
Conclusion
While the proposed processor is not strictly in line with Backstage's idiomatic approach due to the merging of entities from different providers, it can be implemented carefully to achieve your requirements. Alternatively, consider refactoring your entity providers and processors to align more closely with Backstage's design principles by:
Emitting entities only from the Snow provider.
Using processors to enrich entities with data from GitHub without emitting conflicting entities.
Avoiding conflicts by ensuring that each entity is owned and emitted by a single provider.
This approach can help maintain data integrity and prevent potential issues with entity conflicts.
If you have any further questions or need assistance with adjustments, feel free to ask!
Both our providers provide entities with the function applyMutation({ type: 'full' }). Please include in our report how this affects the entity pipeline and any effects it may have on our goal, in detail, with no hallucinations.
Certainly! Let's analyze how both providers using applyMutation({ type: 'full' }) affects the entity pipeline and our goal of having the Snow provider as the source of truth while the GitHub provider adds supplemental information.
Understanding applyMutation({ type: 'full' })
In Backstage, entity providers apply mutations through their EntityProviderConnection, indicating how the emitted entities relate to what the provider emitted before. The two mutation types are:
full Mutation:
Declares the complete set of entities owned by this provider.
Entities the provider emitted previously but omits from the new set are removed from the catalog.
Used when the provider can always enumerate everything it owns.
delta Mutation:
Adds or removes specific entities relative to the provider's previous state, leaving its other entities untouched.
Used for incremental or event-driven updates.
Importantly, neither mutation type merges entity bodies: a mutation always replaces the stored unprocessed entity for a given entityRef wholesale.
Given that both providers emit entities for the same entityRef using type: 'full', each asserts ownership of that entity. Whether this surfaces as rejected writes or as silent overwriting depends on the entities' location keys: writes with a mismatched location key are rejected as conflicts, while entities without location keys offer no such protection and the last write wins.
Effects on the Entity Pipeline
1. Potential Conflicts and Overwriting
Conflict Scenario:
Both the Snow provider and the GitHub provider emit entities with the same entityRef using type: 'full'.
If the providers stamp different location keys, the catalog rejects the later writes, so one provider's data never lands in the catalog.
If no location keys are set, each write replaces the previous one in refresh_state, and the last provider to run wins.
Effect on Stitching:
Stitching builds the final entity from the single latest processed_entity in the refresh_state table.
With the providers blocking or overwriting each other, the final stitched entity either flip-flops between the two sources or permanently lacks one of them.
Either way, this undermines our goal of having the Snow provider as the source of truth.
2. Inconsistent Data
Data Integrity Issues:
With both providers asserting full ownership, there is a risk of losing critical data from the Snow provider when the GitHub provider overwrites it.
Supplemental information from the GitHub provider may inadvertently replace important fields instead of enriching them.
3. Error Handling and Logging
Error Emission:
When location keys are set, the catalog detects the conflicting writes during ingestion and logs errors for the rejected mutations.
Without location keys, no error is raised at all: the providers silently overwrite each other, which is arguably worse.
In either case, the conflict persists until one provider stops claiming the entityRef.
Relevant Code References
1. performStitching Function
File: plugins/catalog-backend/src/database/operations/stitcher/performStitching.ts
Logic:
The performStitching function builds the final entity from the processed_entity in the refresh_state table.
There is exactly one processed_entity row per entityRef, so whichever provider's write last survived ingestion is what stitching sees.
Impact:
With both providers using type: 'full', performStitching has nothing to merge; it assembles the final entity from that single row.
2. Entity Providers and Mutation Types
File: plugins/catalog-backend/src/catalog/types.ts
Code Snippet (paraphrased below; verify against your Backstage version — in recent releases this type lives in @backstage/plugin-catalog-node):
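```ts
import { DeferredEntity } from '@backstage/plugin-catalog-node';

// Paraphrased shape of the mutation type (simplified; 'removed' also accepts
// bare entity refs in newer versions)
type EntityProviderMutation =
  | { type: 'full'; entities: DeferredEntity[] }
  | { type: 'delta'; added: DeferredEntity[]; removed: DeferredEntity[] };
```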
Explanation:
Defines how entity providers indicate the mutation type.
type: 'full' declares the provider's complete entity set; type: 'delta' applies incremental additions and removals.
3. Default Stitcher Behavior
File: plugins/catalog-backend/src/stitching/DefaultStitcher.ts
Logic:
The DefaultStitcher relies on the latest state in the refresh_state table.
Does not merge entities from multiple sources, regardless of mutation type.
Impact:
Reinforces that only one authoritative source should provide entities with type: 'full'.
Effects on Our Goal
Conflict with the Goal
Snow Provider as Source of Truth:
Since both providers use type: 'full', the catalog cannot enforce that the Snow provider is the source of truth.
The GitHub provider may overwrite entities, negating the Snow provider's authority.
Supplemental Information from GitHub:
The GitHub provider's data may replace the Snow provider's data instead of enriching it.
This conflicts with the goal of adding supplemental information.
Recommended Adjustments
To align with Backstage's best practices and achieve our goal, consider the following adjustments:
1. Reconsider the GitHub Provider's Mutation Type
Implementation:
The GitHub provider can emit with applyMutation({ type: 'delta', ... }) to communicate incremental changes instead of re-asserting its complete entity set.
Effect on Pipeline:
This reduces churn, but it does not cause field-level merging: the catalog never combines entity bodies from different sources, regardless of mutation type.
The actual enrichment therefore has to happen in a processor (see adjustment 2 below).
Code Example (a sketch; the connection comes from the provider's connect callback, and the location key value is illustrative):
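```ts
import { EntityProviderConnection } from '@backstage/plugin-catalog-node';
import { Entity } from '@backstage/catalog-model';

// Sketch: emitting an incremental update from the GitHub provider
async function emitDelta(connection: EntityProviderConnection, entity: Entity) {
  await connection.applyMutation({
    type: 'delta',
    added: [{ entity, locationKey: `url:https://github.com/acme/${entity.metadata.name}` }],
    removed: [],
  });
}
```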
2. Adjust the Processor Logic
Modify the Custom Processor:
Instead of having the GitHub provider emit conflicting entities, let a processor enrich the entities emitted by the Snow provider.
Processor Behavior:
During processing, the processor fetches supplemental data from GitHub.
It merges that data into the entity without asserting ownership over it.
Benefits:
Avoids conflicts by not emitting entities with the same entityRef.
Maintains the Snow provider as the source of truth.
Ensures data integrity and consistent stitching.
3. Ensure Unique Entity Sources
Entity Refs Should Be Unique per Source:
If possible, modify the GitHub provider to emit entities with different entityRefs to avoid conflicts.
Use Relations and Annotations:
Link the GitHub data to the Snow entities using relations or annotations.
This allows the data to be associated without causing conflicts.
Conclusion
With both providers applying type: 'full' mutations for the same entityRef, the catalog cannot treat the Snow provider as the source of truth: depending on location keys, the writes are either rejected as conflicts or overwrite one another. The robust path is to let the Snow provider own the Application entities and perform the GitHub enrichment in a processor, as outlined above.
Appendix: the full SnowGitHubMergeProcessor implementation referenced earlier in this gist:
```ts
import {
  CatalogProcessor,
  CatalogProcessorEmit,
  processingResult,
} from '@backstage/plugin-catalog-backend';
import { LocationSpec } from '@backstage/plugin-catalog-common';
import { Entity } from '@backstage/catalog-model';
import { CatalogApi } from '@backstage/catalog-client';
import { Logger } from 'winston';

/**
 * A catalog processor that merges GitHub-provided entities with existing
 * entities from the Snow provider.
 */
export class SnowGitHubMergeProcessor implements CatalogProcessor {
  private readonly catalogApi: CatalogApi;
  private readonly logger: Logger;

  constructor(options: { catalogApi: CatalogApi; logger: Logger }) {
    this.catalogApi = options.catalogApi;
    this.logger = options.logger;
  }

  getProcessorName(): string {
    return 'SnowGitHubMergeProcessor';
  }

  async preProcessEntity(
    entity: Entity,
    location: LocationSpec,
    emit: CatalogProcessorEmit,
  ): Promise<Entity> {
    // Only handle entities of kind 'Application'
    if (entity.kind !== 'Application') {
      return entity;
    }

    // Only handle entities that originate from GitHub
    if (location.type !== 'url' || !location.target.includes('github.com')) {
      return entity;
    }

    // Require the 'snow-app-id' annotation
    const snowAppId = entity.metadata.annotations?.['snow-app-id'];
    if (!snowAppId) {
      emit(
        processingResult.generalError(
          location,
          `Entity ${entity.metadata.name} is missing the 'snow-app-id' annotation`,
        ),
      );
      return entity;
    }

    // Look up the Snow-owned entity in the catalog (ref format: kind:namespace/name)
    const existingEntityRef = `application:default/${snowAppId}`;
    let existingEntity: Entity | undefined;
    try {
      existingEntity = await this.catalogApi.getEntityByRef(existingEntityRef);
    } catch (error) {
      this.logger.error(`Failed to fetch entity ${existingEntityRef}: ${error}`);
      emit(
        processingResult.generalError(
          location,
          `Failed to fetch entity ${existingEntityRef}: ${error}`,
        ),
      );
      return entity;
    }

    if (!existingEntity) {
      emit(
        processingResult.generalError(
          location,
          `No existing entity found in the catalog with ref ${existingEntityRef}`,
        ),
      );
      return entity;
    }

    // Merge the GitHub data into the Snow entity and continue the pipeline
    return this.mergeEntities(existingEntity, entity);
  }

  // Merge two entities: GitHub fields override Snow fields where both exist
  private mergeEntities(existingEntity: Entity, newEntity: Entity): Entity {
    // Deep clone the existing entity to avoid mutating it
    const mergedEntity = JSON.parse(JSON.stringify(existingEntity)) as Entity;

    // Merge metadata.annotations
    mergedEntity.metadata.annotations = {
      ...existingEntity.metadata.annotations,
      ...newEntity.metadata.annotations,
    };

    // Merge spec, with GitHub fields taking precedence
    mergedEntity.spec = {
      ...existingEntity.spec,
      ...newEntity.spec,
    };

    return mergedEntity;
  }
}
```