diff --git a/.kiro/specs/dictionary-schema-support/design.md b/.kiro/specs/dictionary-schema-support/design.md new file mode 100644 index 0000000..fc7e51c --- /dev/null +++ b/.kiro/specs/dictionary-schema-support/design.md @@ -0,0 +1,178 @@ +# Design Document: Dictionary Schema Support + +## Overview + +This design document describes the implementation of proper dictionary type handling in the Oproto.Lambda.OpenApi source generator. The solution adds dictionary detection logic to the type analysis pipeline and generates correct OpenAPI `additionalProperties` schemas for dictionary types. + +### OpenAPI Dictionary Representation + +Per the [OpenAPI Specification](https://swagger.io/docs/specification/data-models/dictionaries/), dictionaries (maps, hashmaps, associative arrays) are represented using `type: object` with `additionalProperties` defining the value type. This is the standard pattern recognized by all major code generators including: + +- **Kiota** (Microsoft) - generates `IDictionary` in C# +- **OpenAPI Generator** - generates `Dictionary` in C# +- **NSwag** - generates `IDictionary` in C# + +OpenAPI only supports string keys for dictionaries, which aligns with JSON's object key constraints. + +## Architecture + +The implementation follows the existing partial class pattern used by the source generator, adding dictionary-specific logic to the type detection and schema creation pipeline. + +``` +┌─────────────────────────────────────────────────────────────────┐ +│ CreateSchema (Entry Point) │ +└─────────────────────────────────────────────────────────────────┘ + │ + ▼ +┌─────────────────────────────────────────────────────────────────┐ +│ 1. TryCreateNullableSchema (existing) │ +│ 2. TryCreateSpecialTypeSchema (existing - Ulid) │ +│ 3. TryCreateCollectionSchema (existing - arrays/lists) │ +│ 4. TryCreateDictionarySchema (NEW) ◄─────────────────────────│ +│ 5. IsSimpleType → CreateSimpleTypeSchema (existing) │ +│ 6. 
CreateComplexTypeSchema (existing - fallback)                      │
└─────────────────────────────────────────────────────────────────┘
```

The key insight is that dictionary detection must occur:
- After nullable handling (to unwrap `Nullable<T>`)
- After collection handling (dictionaries are not arrays)
- Before complex type handling (to prevent incorrect object schema generation)

## Components and Interfaces

### New Methods in OpenApiSpecGenerator_Types.cs

```csharp
/// <summary>
/// Determines if a type is a dictionary type (Dictionary, IDictionary, IReadOnlyDictionary).
/// </summary>
/// <param name="typeSymbol">The type to check</param>
/// <param name="keyType">Output parameter for the dictionary's key type</param>
/// <param name="valueType">Output parameter for the dictionary's value type</param>
/// <returns>True if the type is a dictionary type</returns>
private bool IsDictionaryType(ITypeSymbol typeSymbol, out ITypeSymbol keyType, out ITypeSymbol valueType)
```

### New Methods in OpenApiSpecGenerator_Schema.cs

```csharp
/// <summary>
/// Attempts to create a schema for dictionary types.
/// </summary>
/// <param name="typeSymbol">The type symbol to check for dictionary</param>
/// <param name="memberSymbol">The member symbol for additional metadata</param>
/// <param name="schema">The output schema if the type is a dictionary</param>
/// <returns>True if a dictionary schema was created, false otherwise</returns>
private bool TryCreateDictionarySchema(ITypeSymbol typeSymbol, ISymbol memberSymbol, out OpenApiSchema schema)
```

### Integration Point in CreateSchema

The `CreateSchema` method in `OpenApiSpecGenerator_Schema.cs` will be modified to call `TryCreateDictionarySchema` after collection handling but before complex type handling.
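A possible shape for `IsDictionaryType`, sketched against the Roslyn symbol API — illustrative only; the metadata-name checks and null handling are assumptions, not the final implementation:

```csharp
private bool IsDictionaryType(ITypeSymbol typeSymbol, out ITypeSymbol keyType, out ITypeSymbol valueType)
{
    keyType = null!;
    valueType = null!;

    // Direct check: Dictionary`2, IDictionary`2, IReadOnlyDictionary`2
    if (typeSymbol is INamedTypeSymbol named &&
        named.ConstructedFrom.MetadataName is "Dictionary`2" or "IDictionary`2" or "IReadOnlyDictionary`2")
    {
        keyType = named.TypeArguments[0];
        valueType = named.TypeArguments[1];
        return true;
    }

    // Interface check: custom types implementing IDictionary<TKey, TValue>
    foreach (var iface in typeSymbol.AllInterfaces)
    {
        if (iface.ConstructedFrom.MetadataName == "IDictionary`2")
        {
            keyType = iface.TypeArguments[0];
            valueType = iface.TypeArguments[1];
            return true;
        }
    }

    return false;
}
```

Comparing `ConstructedFrom.MetadataName` matches any constructed form of the generic type, which lines up with the direct-type and interface checks described above.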
## Data Models

### Dictionary Type Detection Patterns

The following type patterns will be recognized as dictionaries:

| Type Pattern | MetadataName | Detection Method |
|--------------|--------------|------------------|
| `Dictionary<TKey, TValue>` | ``Dictionary`2`` | Direct type check |
| `IDictionary<TKey, TValue>` | ``IDictionary`2`` | Direct type check |
| `IReadOnlyDictionary<TKey, TValue>` | ``IReadOnlyDictionary`2`` | Direct type check |
| Custom types implementing `IDictionary<TKey, TValue>` | N/A | Interface check |

### Generated Schema Patterns

| Input Type | Generated Schema |
|------------|------------------|
| `Dictionary<string, string>` | `{ "type": "object", "additionalProperties": { "type": "string" } }` |
| `Dictionary<string, int>` | `{ "type": "object", "additionalProperties": { "type": "integer" } }` |
| `Dictionary<string, ComplexType>` | `{ "type": "object", "additionalProperties": { "$ref": "#/components/schemas/ComplexType" } }` |
| `Dictionary<string, List<T>>` | `{ "type": "object", "additionalProperties": { "type": "array", "items": {...} } }` |
| `Dictionary<string, Dictionary<string, T>>` | `{ "type": "object", "additionalProperties": { "type": "object", "additionalProperties": {...} } }` |

## Correctness Properties

*A property is a characteristic or behavior that should hold true across all valid executions of a system—essentially, a formal statement about what the system should do. Properties serve as the bridge between human-readable specifications and machine-verifiable correctness guarantees.*

### Property 1: Dictionary Type Detection

*For any* type symbol that is `Dictionary<TKey, TValue>`, `IDictionary<TKey, TValue>`, `IReadOnlyDictionary<TKey, TValue>`, or implements `IDictionary<TKey, TValue>`, the `IsDictionaryType` method SHALL return true and correctly extract the key and value types.

**Validates: Requirements 1.1, 1.2, 1.3, 1.4, 1.5**

### Property 2: Dictionary Schema Structure

*For any* dictionary type, the generated OpenAPI schema SHALL have `type: "object"` and a non-null `additionalProperties` schema (not empty `properties`).
+ +**Validates: Requirements 5.2** + +### Property 3: Simple Value Type Schema + +*For any* dictionary with a simple value type (string, int, bool, decimal, DateTime, etc.), the `additionalProperties` schema SHALL have the correct OpenAPI type and format matching the value type. + +**Validates: Requirements 2.1, 2.2, 2.3, 2.4, 2.5** + +### Property 4: Complex Value Type Reference + +*For any* dictionary with a complex (non-simple) value type, the `additionalProperties` schema SHALL contain a reference (`$ref`) to the value type's schema in components. + +**Validates: Requirements 3.1** + +### Property 5: Nullable Dictionary Handling + +*For any* nullable dictionary type (either `Nullable>` or dictionary property with nullable annotation), the generated schema SHALL have `nullable: true`. + +**Validates: Requirements 4.1, 4.2** + +## Error Handling + +### Invalid Dictionary Types + +- If a dictionary type has non-string keys, the generator will still produce an `additionalProperties` schema (OpenAPI only supports string keys for objects) +- If the value type cannot be resolved, the generator falls back to `additionalProperties: { type: "object" }` + +### Circular References + +- Dictionary value types that reference the containing type are handled by the existing circular reference detection in `_processedTypes` +- Self-referential dictionaries produce schemas with `$ref` to prevent infinite recursion + +### Attribute Processing Errors + +- Invalid JSON in `[OpenApiSchema(Example = "...")]` is handled gracefully with fallback to string representation +- Missing or malformed attributes are ignored, using default schema generation + +## Testing Strategy + +### Unit Tests + +Unit tests will verify specific examples and edge cases: + +1. `Dictionary` produces correct schema +2. `Dictionary` produces correct schema with integer type +3. `Dictionary` produces schema with $ref +4. `Dictionary>` produces nested array schema +5. 
`Dictionary<string, Dictionary<string, T>>` produces nested dictionary schema
6. Nullable dictionary produces schema with `nullable: true`
7. Dictionary with `[OpenApiSchema]` attributes applies description and example
8. Custom type implementing `IDictionary<TKey, TValue>` is detected as dictionary

### Property-Based Tests

Property-based tests will verify universal properties using the fast-check library pattern:

1. **Dictionary Detection Property**: For all generated dictionary type symbols, `IsDictionaryType` returns true
2. **Schema Structure Property**: For all dictionary types, generated schema has `additionalProperties` (not empty `properties`)
3. **Value Type Mapping Property**: For all dictionaries with simple value types, `additionalProperties.Type` matches the expected OpenAPI type
4. **Complex Type Reference Property**: For all dictionaries with complex value types, `additionalProperties` contains a `$ref`
5. **Nullable Property**: For all nullable dictionaries, schema has `nullable: true`

### Test Configuration

- Property tests will run a minimum of 100 iterations
- Tests will use the existing `GenerateSchemaFromSource` helper pattern from `OpenApiGeneratorTests.cs`
- Each property test will be tagged with: **Feature: dictionary-schema-support, Property N: {property_text}**

diff --git a/.kiro/specs/dictionary-schema-support/requirements.md b/.kiro/specs/dictionary-schema-support/requirements.md
new file mode 100644
index 0000000..784254c
--- /dev/null
+++ b/.kiro/specs/dictionary-schema-support/requirements.md
@@ -0,0 +1,79 @@

# Requirements Document

## Introduction

This document specifies the requirements for adding proper dictionary type handling to the Oproto.Lambda.OpenApi source generator. Currently, dictionary types (`Dictionary<TKey, TValue>`, `IDictionary<TKey, TValue>`, etc.) are incorrectly treated as complex objects with empty properties, producing invalid OpenAPI schemas. This feature will enable the generator to produce correct `additionalProperties` schemas for dictionary types.
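As an illustration of the defect (abbreviated, hypothetical output), a `Dictionary<string, int>` property currently yields the first schema below; the second is the correct representation:

```json
{
  "current": { "type": "object", "properties": {} },
  "expected": { "type": "object", "additionalProperties": { "type": "integer" } }
}
```

The `current`/`expected` keys are only labels for side-by-side comparison, not part of the generated output.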
+ +## Glossary + +- **Source_Generator**: The Oproto.Lambda.OpenApi.SourceGenerator that analyzes C# code and produces OpenAPI specification files +- **Dictionary_Type**: Any .NET type that implements `IDictionary`, including `Dictionary`, `IDictionary`, and `IReadOnlyDictionary` +- **OpenAPI_Schema**: A JSON Schema-based structure that describes the shape of data in an OpenAPI specification +- **Additional_Properties_Schema**: An OpenAPI schema pattern using `additionalProperties` to describe objects with dynamic string keys and typed values +- **Value_Type**: The type of values stored in a dictionary (the `TValue` in `Dictionary`) +- **Simple_Type**: Primitive types (string, int, bool, etc.), enums, DateTime, DateOnly, TimeOnly, and Guid +- **Complex_Type**: Non-primitive types that require a `$ref` reference in OpenAPI schemas + +## Requirements + +### Requirement 1: Dictionary Type Detection + +**User Story:** As a developer using the source generator, I want dictionary types to be correctly identified, so that they are not incorrectly treated as complex objects. + +#### Acceptance Criteria + +1. WHEN the Source_Generator encounters a `Dictionary` type, THE Source_Generator SHALL identify it as a Dictionary_Type +2. WHEN the Source_Generator encounters an `IDictionary` type, THE Source_Generator SHALL identify it as a Dictionary_Type +3. WHEN the Source_Generator encounters an `IReadOnlyDictionary` type, THE Source_Generator SHALL identify it as a Dictionary_Type +4. WHEN the Source_Generator encounters a type implementing `IDictionary`, THE Source_Generator SHALL identify it as a Dictionary_Type +5. 
WHEN the Source_Generator identifies a Dictionary_Type, THE Source_Generator SHALL extract the Value_Type from the type arguments

### Requirement 2: Dictionary Schema Generation with Simple Value Types

**User Story:** As a developer, I want dictionaries with simple value types to generate correct OpenAPI schemas, so that API consumers understand the data structure.

#### Acceptance Criteria

1. WHEN the Source_Generator creates a schema for `Dictionary<string, string>`, THE Source_Generator SHALL produce a schema with `type: "object"` and `additionalProperties: { type: "string" }`
2. WHEN the Source_Generator creates a schema for `Dictionary<string, int>`, THE Source_Generator SHALL produce a schema with `type: "object"` and `additionalProperties: { type: "integer" }`
3. WHEN the Source_Generator creates a schema for `Dictionary<string, bool>`, THE Source_Generator SHALL produce a schema with `type: "object"` and `additionalProperties: { type: "boolean" }`
4. WHEN the Source_Generator creates a schema for `Dictionary<string, decimal>`, THE Source_Generator SHALL produce a schema with `type: "object"` and `additionalProperties: { type: "number" }`
5. WHEN the Source_Generator creates a schema for `Dictionary<string, DateTime>`, THE Source_Generator SHALL produce a schema with `type: "object"` and `additionalProperties: { type: "string", format: "date-time" }`

### Requirement 3: Dictionary Schema Generation with Complex Value Types

**User Story:** As a developer, I want dictionaries with complex value types to generate schemas with proper references, so that nested types are correctly documented.

#### Acceptance Criteria

1. WHEN the Source_Generator creates a schema for a dictionary with a Complex_Type value, THE Source_Generator SHALL produce a schema with `additionalProperties` containing a `$ref` to the value type
2. WHEN the Source_Generator creates a schema for `Dictionary<string, List<T>>`, THE Source_Generator SHALL produce a schema with `additionalProperties` containing an array schema
3. 
WHEN the Source_Generator creates a schema for `Dictionary<string, Dictionary<string, T>>`, THE Source_Generator SHALL produce a schema with nested `additionalProperties` schemas

### Requirement 4: Nullable Dictionary Handling

**User Story:** As a developer, I want nullable dictionaries to be correctly represented in the schema, so that optional dictionary properties are properly documented.

#### Acceptance Criteria

1. WHEN the Source_Generator creates a schema for `Dictionary<TKey, TValue>?` (nullable dictionary), THE Source_Generator SHALL produce a schema with `nullable: true`
2. WHEN the Source_Generator creates a schema for a dictionary property with nullable annotation, THE Source_Generator SHALL set `nullable: true` on the schema

### Requirement 5: Dictionary Type Priority in Schema Creation

**User Story:** As a developer, I want dictionary detection to occur before complex type handling, so that dictionaries are not incorrectly processed as regular objects.

#### Acceptance Criteria

1. WHEN the Source_Generator processes a type, THE Source_Generator SHALL check for Dictionary_Type before checking for Complex_Type
2. WHEN a Dictionary_Type is detected, THE Source_Generator SHALL NOT fall through to CreateComplexTypeSchema
3. WHEN a type is both a Dictionary_Type and has other properties, THE Source_Generator SHALL treat it as a Dictionary_Type (additionalProperties takes precedence)

### Requirement 6: Schema Attribute Support for Dictionaries

**User Story:** As a developer, I want to apply OpenApiSchema attributes to dictionary properties, so that I can customize the generated schema.

#### Acceptance Criteria

1. WHEN a dictionary property has an `[OpenApiSchema]` attribute with Description, THE Source_Generator SHALL apply the description to the dictionary schema
2. 
WHEN a dictionary property has an `[OpenApiSchema]` attribute with Example, THE Source_Generator SHALL apply the example to the dictionary schema diff --git a/.kiro/specs/dictionary-schema-support/tasks.md b/.kiro/specs/dictionary-schema-support/tasks.md new file mode 100644 index 0000000..6ff438e --- /dev/null +++ b/.kiro/specs/dictionary-schema-support/tasks.md @@ -0,0 +1,99 @@ +# Implementation Plan: Dictionary Schema Support + +## Overview + +This implementation adds proper dictionary type handling to the Oproto.Lambda.OpenApi source generator. The work is organized into three main phases: dictionary type detection, schema generation, and comprehensive testing. + +## Tasks + +- [ ] 1. Implement dictionary type detection + - [ ] 1.1 Add `IsDictionaryType` method to `OpenApiSpecGenerator_Types.cs` + - Implement detection for `Dictionary`, `IDictionary`, `IReadOnlyDictionary` + - Extract key and value type symbols from type arguments + - Check for interface implementation for custom dictionary types + - _Requirements: 1.1, 1.2, 1.3, 1.4, 1.5_ + +- [ ] 2. 
Implement dictionary schema generation + - [ ] 2.1 Add `TryCreateDictionarySchema` method to `OpenApiSpecGenerator_Schema.cs` + - Create schema with `type: "object"` and `additionalProperties` + - Recursively call `CreateSchema` for the value type + - Apply `[OpenApiSchema]` attributes (Description, Example) to dictionary schema + - _Requirements: 2.1, 2.2, 2.3, 2.4, 2.5, 3.1, 3.2, 3.3, 6.1, 6.2_ + + - [ ] 2.2 Integrate dictionary detection into `CreateSchema` pipeline + - Add `TryCreateDictionarySchema` call after `TryCreateCollectionSchema` and before complex type handling + - Ensure dictionaries don't fall through to `CreateComplexTypeSchema` + - _Requirements: 5.1, 5.2, 5.3_ + + - [ ] 2.3 Handle nullable dictionary types + - Ensure nullable dictionaries (`Dictionary?`) produce `nullable: true` in schema + - Handle nullable reference type annotations on dictionary properties + - _Requirements: 4.1, 4.2_ + +- [ ] 3. Checkpoint - Verify implementation compiles + - Run `dotnet build` on the source generator project + - Ensure no compiler warnings or errors + - Ensure all tests pass, ask the user if questions arise + +- [ ] 4. 
Add unit tests for dictionary schema generation
  - [ ] 4.1 Add basic dictionary type tests to `OpenApiGeneratorTests.cs`
    - Test `Dictionary<string, string>` produces correct schema
    - Test `Dictionary<string, int>` produces integer additionalProperties
    - Test `Dictionary<string, bool>` produces boolean additionalProperties
    - Test `Dictionary<string, decimal>` produces number additionalProperties
    - Test `Dictionary<string, DateTime>` produces string with date-time format
    - _Requirements: 2.1, 2.2, 2.3, 2.4, 2.5_

  - [ ] 4.2 Add complex value type tests
    - Test `Dictionary<string, ComplexType>` produces $ref in additionalProperties
    - Test `Dictionary<string, List<T>>` produces nested array schema
    - Test `Dictionary<string, Dictionary<string, T>>` produces nested dictionary schema
    - _Requirements: 3.1, 3.2, 3.3_

  - [ ] 4.3 Add nullable dictionary tests
    - Test nullable dictionary property produces `nullable: true`
    - Test `Dictionary<TKey, TValue>?` produces `nullable: true`
    - _Requirements: 4.1, 4.2_

  - [ ] 4.4 Add dictionary interface tests
    - Test `IDictionary<TKey, TValue>` is detected as dictionary
    - Test `IReadOnlyDictionary<TKey, TValue>` is detected as dictionary
    - _Requirements: 1.2, 1.3_

  - [ ] 4.5 Add attribute support tests
    - Test `[OpenApiSchema(Description = "...")]` applies to dictionary schema
    - Test `[OpenApiSchema(Example = "...")]` applies to dictionary schema
    - _Requirements: 6.1, 6.2_

- [ ] 5. 
Add property-based tests for dictionary handling + - [ ] 5.1 Write property test for dictionary type detection + - **Property 1: Dictionary Type Detection** + - **Validates: Requirements 1.1, 1.2, 1.3, 1.4, 1.5** + + - [ ] 5.2 Write property test for dictionary schema structure + - **Property 2: Dictionary Schema Structure** + - **Validates: Requirements 5.2** + + - [ ] 5.3 Write property test for simple value type mapping + - **Property 3: Simple Value Type Schema** + - **Validates: Requirements 2.1, 2.2, 2.3, 2.4, 2.5** + + - [ ] 5.4 Write property test for complex value type references + - **Property 4: Complex Value Type Reference** + - **Validates: Requirements 3.1** + + - [ ] 5.5 Write property test for nullable dictionary handling + - **Property 5: Nullable Dictionary Handling** + - **Validates: Requirements 4.1, 4.2** + +- [ ] 6. Final checkpoint - Ensure all tests pass + - Run full test suite with `dotnet test` + - Verify no regressions in existing functionality + - Ensure all tests pass, ask the user if questions arise + +## Notes + +- Each task references specific requirements for traceability +- Checkpoints ensure incremental validation +- Property tests validate universal correctness properties +- Unit tests validate specific examples and edge cases diff --git a/.kiro/specs/lambda-merge-tool/design.md b/.kiro/specs/lambda-merge-tool/design.md new file mode 100644 index 0000000..e3c0c10 --- /dev/null +++ b/.kiro/specs/lambda-merge-tool/design.md @@ -0,0 +1,827 @@ +# Design Document: Lambda Merge Tool + +## Overview + +This design describes an AWS Lambda-based OpenAPI merge solution that automatically merges multiple OpenAPI specification files when changes are detected in S3. The architecture uses S3 event notifications, Step Functions for debouncing, and a CDK construct library for easy deployment. + +The solution consists of three main components: +1. **Merge Lambda** - A Lambda function that performs the actual merge operation +2. 
**Debounce State Machine** - A Step Functions workflow that batches rapid changes
3. **CDK Construct** - A reusable infrastructure component for easy deployment

```mermaid
flowchart TB
    subgraph S3["S3 Input Bucket"]
        Config["/{prefix}/config.json"]
        Spec1["/{prefix}/service1.json"]
        Spec2["/{prefix}/service2.json"]
    end

    subgraph EventBridge["EventBridge"]
        Rule["S3 Event Rule<br/>(filtered by prefix)"]
    end

    subgraph StepFunctions["Step Functions"]
        Debounce["Debounce State Machine"]
        Wait["Wait State<br/>(5s default)"]
        Invoke["Invoke Lambda"]
    end

    subgraph Lambda["Lambda"]
        MergeFn["Merge Function"]
    end

    subgraph Output["S3 Output"]
        Merged["/{prefix}/merged.json"]
    end

    S3 -->|"Object Created/Modified/Deleted"| Rule
    Rule -->|"Start Execution"| Debounce
    Debounce --> Wait
    Wait --> Invoke
    Invoke --> MergeFn
    MergeFn -->|"Read Config"| Config
    MergeFn -->|"Read Specs"| Spec1
    MergeFn -->|"Read Specs"| Spec2
    MergeFn -->|"Write if Changed"| Merged
```

## Architecture

### Component Interaction Flow

1. **S3 Event** → User uploads/modifies a file in `{prefix}/`
2. **EventBridge Rule** → Filters events by prefix pattern, triggers Step Functions
3. **Debounce State Machine** → Waits for configurable duration, resets on new events
4. **Merge Lambda** → Loads config, discovers/loads sources, merges, compares, writes if changed

### Debounce Strategy

The debounce mechanism uses Step Functions with a DynamoDB table to track active executions per prefix. The key challenge is handling events that arrive *during* merge execution - we need to ensure those changes get merged too.

**Solution**: After the merge completes, we check if any new events arrived during execution. If so, we loop back and merge again.
+ +```mermaid +stateDiagram-v2 + [*] --> CheckExisting: S3 Event Received + CheckExisting --> UpdateTimestamp: Execution exists for prefix + CheckExisting --> CreateExecution: No existing execution + UpdateTimestamp --> [*]: Return (let existing execution handle) + CreateExecution --> WaitState: Start new execution + WaitState --> CheckForUpdates: Wait expires + CheckForUpdates --> WaitState: Newer timestamp found (reset wait) + CheckForUpdates --> RecordMergeStart: No newer events + RecordMergeStart --> InvokeMerge: Record merge start time + InvokeMerge --> CheckPostMerge: Merge complete + CheckPostMerge --> WaitState: Events arrived during merge + CheckPostMerge --> Cleanup: No new events + Cleanup --> [*]: Remove execution record +``` + +**DynamoDB Record Structure**: +```json +{ + "prefix": "publicapi", // Partition key + "executionId": "exec-123", // Current owner execution + "lastEventTime": "2025-01-01T12:00:00Z", // Last S3 event timestamp + "mergeStartTime": "2025-01-01T12:00:05Z", // When merge started (null if waiting) + "ttl": 1735689600 // Auto-cleanup after 5 minutes +} +``` + +**Race Condition Handling**: +1. When a new event arrives, it updates `lastEventTime` +2. Before invoking merge, we record `mergeStartTime` +3. After merge completes, we check if `lastEventTime > mergeStartTime` +4. If yes, new events arrived during merge → loop back to wait state +5. If no, we're done → cleanup and exit + +This ensures no events are "lost" even if they arrive during merge execution. 
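Using the timestamps from the example record above, the post-merge decision reduces to a single comparison. A minimal sketch (values illustrative; in practice this check runs as a Choice state in Step Functions, not as C# code):

```csharp
// Values as they might appear after a merge completes (illustrative).
var lastEventTime  = DateTimeOffset.Parse("2025-01-01T12:00:07Z"); // an event landed mid-merge
var mergeStartTime = DateTimeOffset.Parse("2025-01-01T12:00:05Z");

if (lastEventTime > mergeStartTime)
{
    // New events arrived while merging: loop back to the wait state and merge again.
}
else
{
    // Nothing new arrived: delete the execution record and exit.
}
```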
+ +### Project Structure + +``` +Oproto.Lambda.OpenApi.Merge.Lambda/ +├── Functions/ +│ └── MergeFunction.cs # Lambda handler with Annotations +├── Services/ +│ ├── IS3Service.cs # S3 operations interface +│ ├── S3Service.cs # S3 operations implementation +│ ├── IConfigLoader.cs # Config loading interface +│ ├── ConfigLoader.cs # Config loading implementation +│ ├── ISourceDiscovery.cs # Source file discovery interface +│ └── SourceDiscovery.cs # Source file discovery implementation +├── Models/ +│ ├── LambdaMergeConfig.cs # Extended config for Lambda +│ └── MergeResponse.cs # Lambda response model +├── Oproto.Lambda.OpenApi.Merge.Lambda.csproj +└── serverless.template + +Oproto.Lambda.OpenApi.Merge.Cdk/ +├── OpenApiMergeConstruct.cs # Main CDK construct +├── OpenApiMergeConstructProps.cs # Construct properties +├── Oproto.Lambda.OpenApi.Merge.Cdk.csproj +└── cloudformation/ + └── openapi-merge.yaml # Standalone CFN template +``` + +## Components and Interfaces + +### Lambda Function + +```csharp +namespace Oproto.Lambda.OpenApi.Merge.Lambda.Functions; + +using Amazon.Lambda.Annotations; +using Amazon.Lambda.Core; +using Amazon.Lambda.S3Events; + +public class MergeFunction +{ + private readonly IS3Service _s3Service; + private readonly IConfigLoader _configLoader; + private readonly ISourceDiscovery _sourceDiscovery; + private readonly ILogger _logger; + + public MergeFunction( + IS3Service s3Service, + IConfigLoader configLoader, + ISourceDiscovery sourceDiscovery, + ILogger logger) + { + _s3Service = s3Service; + _configLoader = configLoader; + _sourceDiscovery = sourceDiscovery; + _logger = logger; + } + + [LambdaFunction] + public async Task HandleMerge( + MergeRequest request, + ILambdaContext context) + { + // Implementation handles: + // 1. Extract prefix from request + // 2. Load config from {prefix}/config.json + // 3. Discover or load explicit sources + // 4. Perform merge using OpenApiMerger + // 5. Compare with existing output + // 6. 
Write if changed
        // 7. Return response with metrics
    }
}
```

### S3 Service Interface

```csharp
namespace Oproto.Lambda.OpenApi.Merge.Lambda.Services;

public interface IS3Service
{
    /// <summary>
    /// Reads a JSON file from S3 and deserializes it.
    /// </summary>
    Task<T?> ReadJsonAsync<T>(string bucket, string key, CancellationToken ct = default);

    /// <summary>
    /// Reads raw text content from S3.
    /// </summary>
    Task<string> ReadTextAsync(string bucket, string key, CancellationToken ct = default);

    /// <summary>
    /// Writes JSON content to S3.
    /// </summary>
    Task WriteJsonAsync<T>(string bucket, string key, T content, CancellationToken ct = default);

    /// <summary>
    /// Lists objects with a given prefix.
    /// </summary>
    Task<IReadOnlyList<string>> ListObjectsAsync(string bucket, string prefix, CancellationToken ct = default);

    /// <summary>
    /// Checks if an object exists.
    /// </summary>
    Task<bool> ExistsAsync(string bucket, string key, CancellationToken ct = default);
}
```

### Config Loader Interface

```csharp
namespace Oproto.Lambda.OpenApi.Merge.Lambda.Services;

public interface IConfigLoader
{
    /// <summary>
    /// Loads and validates the merge configuration from S3.
    /// </summary>
    Task<LambdaMergeConfig> LoadConfigAsync(
        string bucket,
        string prefix,
        CancellationToken ct = default);
}
```

### Source Discovery Interface

```csharp
namespace Oproto.Lambda.OpenApi.Merge.Lambda.Services;

public interface ISourceDiscovery
{
    /// <summary>
    /// Discovers source files based on configuration.
    /// </summary>
    Task<IReadOnlyList<DiscoveredSource>> DiscoverSourcesAsync(
        string bucket,
        string prefix,
        LambdaMergeConfig config,
        CancellationToken ct = default);
}

public record DiscoveredSource(
    string Key,
    string Name,
    SourceConfiguration? ExplicitConfig);
```

### CDK Construct Interface

```csharp
namespace Oproto.Lambda.OpenApi.Merge.Cdk;

public class OpenApiMergeConstructProps
{
    /// <summary>
    /// The S3 bucket containing input files. Required.
    /// </summary>
    public required IBucket InputBucket { get; init; }

    /// <summary>
    /// The S3 bucket for output files. Defaults to InputBucket.
    /// </summary>
    public IBucket? 
OutputBucket { get; init; } + + /// + /// List of API prefixes to monitor (e.g., "publicapi/", "internalapi/"). + /// + public required IReadOnlyList ApiPrefixes { get; init; } + + /// + /// Debounce wait duration in seconds. Default: 5. + /// + public int DebounceSeconds { get; init; } = 5; + + /// + /// Whether to create CloudWatch alarms. Default: true. + /// + public bool EnableAlarms { get; init; } = true; + + /// + /// Failure count threshold for alarms. Default: 1. + /// + public int AlarmThreshold { get; init; } = 1; + + /// + /// Number of evaluation periods for alarms. Default: 1. + /// + public int AlarmEvaluationPeriods { get; init; } = 1; + + /// + /// Optional SNS topic for alarm notifications. + /// + public ITopic? AlarmTopic { get; init; } + + /// + /// Lambda memory size in MB. Default: 512. + /// + public int MemorySize { get; init; } = 512; + + /// + /// Lambda timeout in seconds. Default: 60. + /// + public int TimeoutSeconds { get; init; } = 60; +} +``` + +## Data Models + +### Lambda Merge Configuration + +The Lambda configuration extends the existing `MergeConfiguration` to maintain backwards compatibility. The new fields are also being added to the base `MergeConfiguration` class so they work with the CLI tool as well. + +**Base MergeConfiguration Changes** (in Oproto.Lambda.OpenApi.Merge): + +```csharp +namespace Oproto.Lambda.OpenApi.Merge; + +public class MergeConfiguration +{ + // ... existing properties ... + + /// + /// Whether to auto-discover source files in the directory. + /// When true, ignores the sources list and discovers all .json files. + /// Default: false (use explicit sources list). + /// + [JsonPropertyName("autoDiscover")] + public bool AutoDiscover { get; set; } = false; + + /// + /// Glob patterns for files to exclude from auto-discovery. + /// Only used when autoDiscover is true. + /// Always excludes the output file automatically. 
/// </summary>
    [JsonPropertyName("excludePatterns")]
    public List<string> ExcludePatterns { get; set; } = new();
}
```

**Lambda-Specific Extension** (in Oproto.Lambda.OpenApi.Merge.Lambda):

```csharp
namespace Oproto.Lambda.OpenApi.Merge.Lambda.Models;

public class LambdaMergeConfig : MergeConfiguration
{
    /// <summary>
    /// Output bucket name. If not specified, uses input bucket.
    /// Only applicable in Lambda context.
    /// </summary>
    [JsonPropertyName("outputBucket")]
    public string? OutputBucket { get; set; }
}
```

**Backwards Compatibility**:
- Existing configs without `autoDiscover` default to `false` (explicit sources)
- Existing configs without `excludePatterns` default to empty list
- All existing `MergeConfiguration` properties are preserved
- CLI tool gains `autoDiscover` and `excludePatterns` support automatically

### Merge Request/Response

```csharp
namespace Oproto.Lambda.OpenApi.Merge.Lambda.Models;

public record MergeRequest(
    string InputBucket,
    string Prefix,
    string? OutputBucket = null);

public record MergeResponse(
    bool Success,
    string Message,
    MergeMetrics Metrics,
    IReadOnlyList<string>? Warnings = null,
    string? Error = null);

public record MergeMetrics(
    int SourceFilesProcessed,
    int SchemasMergedCount,
    int PathsMergedCount,
    long DurationMs,
    bool OutputWritten,
    string? 
OutputKey = null); +``` + +### Example Config File + +```json +{ + "info": { + "title": "Public API", + "version": "1.0.0", + "description": "Merged public API specification" + }, + "servers": [ + { + "url": "https://api.example.com/v1", + "description": "Production" + } + ], + "autoDiscover": true, + "excludePatterns": ["*-draft.json", "*.backup.json"], + "output": "merged-openapi.json", + "schemaConflict": "rename" +} +``` + +### Example Config with Explicit Sources + +```json +{ + "info": { + "title": "Internal API", + "version": "2.0.0" + }, + "autoDiscover": false, + "sources": [ + { + "path": "users-service.json", + "name": "Users", + "pathPrefix": "/users" + }, + { + "path": "orders-service.json", + "name": "Orders", + "pathPrefix": "/orders" + } + ], + "output": "internal-api.json", + "schemaConflict": "rename" +} +``` + +## Step Functions State Machine (JSONata) + +```json +{ + "Comment": "Debounce OpenAPI merge operations per API prefix with post-merge event checking", + "QueryLanguage": "JSONata", + "StartAt": "ExtractPrefix", + "States": { + "ExtractPrefix": { + "Type": "Pass", + "Output": { + "prefix": "{% $substringBefore($states.input.detail.object.key, '/') & '/' & ($count($split($states.input.detail.object.key, '/')) > 2 ? 
$join($filter($split($states.input.detail.object.key, '/'), function($v, $i) { $i < $count($split($states.input.detail.object.key, '/')) - 1 }), '/') : $substringBefore($states.input.detail.object.key, '/')) %}", + "bucket": "{% $states.input.detail.bucket.name %}", + "eventTime": "{% $states.input.time %}" + }, + "Next": "CheckExistingExecution" + }, + "CheckExistingExecution": { + "Type": "Task", + "Resource": "arn:aws:states:::dynamodb:getItem", + "Arguments": { + "TableName": "${DebounceTable}", + "Key": { + "prefix": {"S": "{% $states.input.prefix %}"} + } + }, + "Output": { + "prefix": "{% $states.input.prefix %}", + "bucket": "{% $states.input.bucket %}", + "eventTime": "{% $states.input.eventTime %}", + "exists": "{% $exists($states.result.Item) %}", + "existingExecutionId": "{% $states.result.Item.executionId.S %}" + }, + "Next": "BranchOnExisting" + }, + "BranchOnExisting": { + "Type": "Choice", + "Choices": [ + { + "Condition": "{% $states.input.exists = true %}", + "Next": "UpdateEventTimestamp" + } + ], + "Default": "CreateExecution" + }, + "UpdateEventTimestamp": { + "Type": "Task", + "Resource": "arn:aws:states:::dynamodb:updateItem", + "Arguments": { + "TableName": "${DebounceTable}", + "Key": { + "prefix": {"S": "{% $states.input.prefix %}"} + }, + "UpdateExpression": "SET lastEventTime = :ts", + "ExpressionAttributeValues": { + ":ts": {"S": "{% $states.input.eventTime %}"} + } + }, + "Comment": "Update timestamp and exit - existing execution will handle", + "End": true + }, + "CreateExecution": { + "Type": "Task", + "Resource": "arn:aws:states:::dynamodb:putItem", + "Arguments": { + "TableName": "${DebounceTable}", + "Item": { + "prefix": {"S": "{% $states.input.prefix %}"}, + "executionId": {"S": "{% $states.context.Execution.Id %}"}, + "lastEventTime": {"S": "{% $states.input.eventTime %}"}, + "ttl": {"N": "{% $string($floor(($toMillis($now()) / 1000) + 300)) %}"} + } + }, + "Output": { + "prefix": "{% $states.input.prefix %}", + "bucket": "{% 
$states.input.bucket %}" + }, + "Next": "WaitForDebounce" + }, + "WaitForDebounce": { + "Type": "Wait", + "Seconds": "${DebounceSeconds}", + "Next": "CheckForNewerEvents" + }, + "CheckForNewerEvents": { + "Type": "Task", + "Resource": "arn:aws:states:::dynamodb:getItem", + "Arguments": { + "TableName": "${DebounceTable}", + "Key": { + "prefix": {"S": "{% $states.input.prefix %}"} + } + }, + "Output": { + "prefix": "{% $states.input.prefix %}", + "bucket": "{% $states.input.bucket %}", + "isOwner": "{% $states.result.Item.executionId.S = $states.context.Execution.Id %}", + "lastEventTime": "{% $states.result.Item.lastEventTime.S %}", + "mergeStartTime": "{% $states.result.Item.mergeStartTime.S %}" + }, + "Next": "BranchOnOwnership" + }, + "BranchOnOwnership": { + "Type": "Choice", + "Choices": [ + { + "Condition": "{% $states.input.isOwner = false %}", + "Next": "AbandonExecution" + } + ], + "Default": "RecordMergeStart" + }, + "AbandonExecution": { + "Type": "Succeed", + "Comment": "Another execution took over" + }, + "RecordMergeStart": { + "Type": "Task", + "Resource": "arn:aws:states:::dynamodb:updateItem", + "Arguments": { + "TableName": "${DebounceTable}", + "Key": { + "prefix": {"S": "{% $states.input.prefix %}"} + }, + "UpdateExpression": "SET mergeStartTime = :ts", + "ExpressionAttributeValues": { + ":ts": {"S": "{% $now() %}"} + } + }, + "Output": { + "prefix": "{% $states.input.prefix %}", + "bucket": "{% $states.input.bucket %}", + "mergeStartTime": "{% $now() %}" + }, + "Next": "InvokeMergeLambda" + }, + "InvokeMergeLambda": { + "Type": "Task", + "Resource": "arn:aws:states:::lambda:invoke", + "Arguments": { + "FunctionName": "${MergeFunctionArn}", + "Payload": { + "inputBucket": "{% $states.input.bucket %}", + "prefix": "{% $states.input.prefix %}", + "outputBucket": "${OutputBucket}" + } + }, + "Output": { + "prefix": "{% $states.input.prefix %}", + "bucket": "{% $states.input.bucket %}", + "mergeStartTime": "{% $states.input.mergeStartTime %}", + 
"mergeResult": "{% $states.result.Payload %}" + }, + "Next": "CheckPostMergeEvents", + "Catch": [ + { + "ErrorEquals": ["States.ALL"], + "Next": "CheckPostMergeEvents", + "Output": { + "prefix": "{% $states.input.prefix %}", + "bucket": "{% $states.input.bucket %}", + "mergeStartTime": "{% $states.input.mergeStartTime %}", + "mergeError": "{% $states.error %}" + } + } + ] + }, + "CheckPostMergeEvents": { + "Type": "Task", + "Resource": "arn:aws:states:::dynamodb:getItem", + "Arguments": { + "TableName": "${DebounceTable}", + "Key": { + "prefix": {"S": "{% $states.input.prefix %}"} + } + }, + "Output": { + "prefix": "{% $states.input.prefix %}", + "bucket": "{% $states.input.bucket %}", + "mergeStartTime": "{% $states.input.mergeStartTime %}", + "lastEventTime": "{% $states.result.Item.lastEventTime.S %}", + "hasNewerEvents": "{% $toMillis($states.result.Item.lastEventTime.S) > $toMillis($states.input.mergeStartTime) %}" + }, + "Next": "BranchOnPostMergeEvents" + }, + "BranchOnPostMergeEvents": { + "Type": "Choice", + "Choices": [ + { + "Condition": "{% $states.input.hasNewerEvents = true %}", + "Next": "WaitForDebounce", + "Comment": "Events arrived during merge - loop back" + } + ], + "Default": "CleanupExecution" + }, + "CleanupExecution": { + "Type": "Task", + "Resource": "arn:aws:states:::dynamodb:deleteItem", + "Arguments": { + "TableName": "${DebounceTable}", + "Key": { + "prefix": {"S": "{% $states.input.prefix %}"} + } + }, + "End": true + } + } +} +``` + + + +## Correctness Properties + +*A property is a characteristic or behavior that should hold true across all valid executions of a system—essentially, a formal statement about what the system should do. 
Properties serve as the bridge between human-readable specifications and machine-verifiable correctness guarantees.* + +### Property 1: Prefix Extraction Consistency + +*For any* valid S3 key containing a prefix structure (e.g., `publicapi/config.json`, `internal/v2/service.json`), extracting the API prefix SHALL always return the directory path portion before the filename, and this extraction SHALL be deterministic (same input always produces same output). + +**Validates: Requirements 1.1, 1.2, 1.3, 1.4** + +### Property 2: Config Compatibility Round-Trip + +*For any* valid `MergeConfiguration` object from the existing library, serializing it to JSON and deserializing it as `LambdaMergeConfig` SHALL preserve all original property values, and the `LambdaMergeConfig` SHALL have sensible defaults for Lambda-specific properties (`autoDiscover = false`). + +**Validates: Requirements 2.3, 3.1** + +### Property 3: Auto-Discovery Filtering + +*For any* set of S3 keys within a prefix, when `autoDiscover` is true, the discovered sources SHALL include only `.json` files, SHALL exclude `config.json`, SHALL exclude the configured output file, and SHALL exclude files matching any `excludePatterns`. + +**Validates: Requirements 3.2** + +### Property 4: Explicit Sources Validation + +*For any* `LambdaMergeConfig` where `autoDiscover` is false, the config SHALL be considered invalid if the `sources` array is null or empty. + +**Validates: Requirements 3.3** + +### Property 5: Output Comparison Normalization + +*For any* two OpenAPI documents that are semantically equivalent (same paths, schemas, operations) but differ only in JSON formatting (whitespace, property order), the comparison function SHALL return true (equal). + +**Validates: Requirements 5.2, 5.5** + +### Property 6: Conditional Write Correctness + +*For any* merge operation, if the new merged spec is semantically identical to the existing output spec, the write operation SHALL be skipped. 
If they differ, the write operation SHALL occur. + +**Validates: Requirements 5.3, 5.4** + +### Property 7: Output Path Construction + +*For any* prefix and output filename configuration, the constructed output path SHALL be `{prefix}/{output}` for single-bucket mode, and the output key SHALL always be relative to the configured output bucket. + +**Validates: Requirements 6.3, 6.5** + +## Error Handling + +### Configuration Errors + +| Error Condition | Response | HTTP-equivalent | +|----------------|----------|-----------------| +| Config file not found | `MergeResponse` with `Success=false`, `Error="Configuration file not found at {path}"` | 404 | +| Invalid JSON in config | `MergeResponse` with `Success=false`, `Error="Invalid JSON: {details}"` | 400 | +| Missing required fields | `MergeResponse` with `Success=false`, `Error="Missing required field: {field}"` | 400 | +| Empty sources (autoDiscover=false) | `MergeResponse` with `Success=false`, `Error="No sources specified and autoDiscover is disabled"` | 400 | + +### Source File Errors + +| Error Condition | Response | Behavior | +|----------------|----------|----------| +| Source file not found | Warning logged, file skipped | Continue with remaining | +| Invalid OpenAPI spec | Warning logged, file skipped | Continue with remaining | +| All sources invalid | `MergeResponse` with `Success=false`, `Error="No valid source files found"` | Fail | + +### Merge Errors + +| Error Condition | Response | +|----------------|----------| +| Schema conflict (when strategy=Fail) | `MergeResponse` with `Success=false`, `Error="Schema conflict: {details}"` | +| Path conflict | `MergeResponse` with `Success=false`, `Error="Duplicate path: {path}"` | + +### S3 Errors + +| Error Condition | Response | +|----------------|----------| +| Access denied | `MergeResponse` with `Success=false`, `Error="Access denied to {bucket}/{key}"` | +| Bucket not found | `MergeResponse` with `Success=false`, `Error="Bucket not found: {bucket}"` | +| 
Write failure | `MergeResponse` with `Success=false`, `Error="Failed to write output: {details}"` | + +## Testing Strategy + +### Dual Testing Approach + +This feature uses both unit tests and property-based tests: + +- **Unit tests**: Verify specific examples, edge cases, and error conditions +- **Property tests**: Verify universal properties across all valid inputs using FsCheck + +### Property-Based Testing Configuration + +- **Library**: FsCheck with xUnit integration +- **Minimum iterations**: 100 per property test +- **Tag format**: `Feature: lambda-merge-tool, Property {number}: {property_text}` + +### Test Categories + +#### Unit Tests + +1. **Prefix Extraction** + - Extract prefix from `publicapi/config.json` → `publicapi` + - Extract prefix from `internal/v2/service.json` → `internal/v2` + - Handle root-level files (no prefix) + +2. **Config Loading** + - Load valid config with all fields + - Load minimal config with defaults + - Error on missing config file + - Error on invalid JSON + +3. **Source Discovery** + - Auto-discover finds all JSON files + - Auto-discover excludes config.json + - Auto-discover excludes output file + - Auto-discover applies exclude patterns + - Explicit sources loads specified files + +4. **Output Comparison** + - Identical specs return equal + - Different specs return not equal + - Formatting differences ignored + +5. **Error Handling** + - Missing source file logged and skipped + - Invalid OpenAPI logged and skipped + - All sources invalid returns error + +#### Property Tests + +1. **PrefixExtractionProperty** - Property 1 + - Generate random valid S3 keys + - Verify prefix extraction is deterministic + - Verify extracted prefix + filename = original key + +2. **ConfigCompatibilityProperty** - Property 2 + - Generate random MergeConfiguration objects + - Serialize to JSON, deserialize as LambdaMergeConfig + - Verify all original properties preserved + +3. 
**AutoDiscoveryFilteringProperty** - Property 3 + - Generate random sets of S3 keys + - Apply auto-discovery filtering + - Verify only valid sources included + +4. **ExplicitSourcesValidationProperty** - Property 4 + - Generate configs with autoDiscover=false + - Verify validation fails when sources empty + +5. **OutputComparisonNormalizationProperty** - Property 5 + - Generate random OpenAPI documents + - Create formatting variations + - Verify comparison returns equal + +6. **ConditionalWriteProperty** - Property 6 + - Generate merge scenarios + - Verify write occurs only when content differs + +7. **OutputPathConstructionProperty** - Property 7 + - Generate random prefixes and output filenames + - Verify path construction is correct + +### Test File Structure + +``` +Oproto.Lambda.OpenApi.Merge.Lambda.Tests/ +├── PrefixExtractionTests.cs +├── PrefixExtractionPropertyTests.cs +├── ConfigLoaderTests.cs +├── ConfigCompatibilityPropertyTests.cs +├── SourceDiscoveryTests.cs +├── AutoDiscoveryPropertyTests.cs +├── OutputComparisonTests.cs +├── OutputComparisonPropertyTests.cs +├── ConditionalWritePropertyTests.cs +├── OutputPathPropertyTests.cs +└── GlobalUsings.cs +``` diff --git a/.kiro/specs/lambda-merge-tool/requirements.md b/.kiro/specs/lambda-merge-tool/requirements.md new file mode 100644 index 0000000..763fa4d --- /dev/null +++ b/.kiro/specs/lambda-merge-tool/requirements.md @@ -0,0 +1,185 @@ +# Requirements Document + +## Introduction + +This feature provides an AWS Lambda-based implementation of the OpenAPI merge tool that automatically merges multiple OpenAPI specification files when changes are detected in S3. The solution uses S3 event notifications to trigger merges, supports flexible bucket configurations, and includes debouncing to handle rapid successive changes efficiently. As part of an OSS project, the deployment model prioritizes ease of adoption through a .NET CDK construct library. 
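+
+To make the trigger model concrete, here is a hypothetical bucket layout (all bucket and file names are illustrative, not prescriptive) showing how config, source, and output files are grouped under API prefixes:
+
+```
+s3://specs-bucket/                 # Input_Bucket (also Output_Bucket in single-bucket mode)
+├── publicapi/                     # API_Prefix "publicapi"
+│   ├── config.json                # Config_File (changes here trigger a merge)
+│   ├── users-service.json         # Source_Spec (changes here trigger a merge)
+│   ├── orders-service.json        # Source_Spec
+│   └── merged-openapi.json        # Output_Spec (excluded from auto-discovery)
+└── internalapi/                   # independent API_Prefix with its own debounce timer
+    └── config.json
+```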
+ +## Glossary + +- **Merge_Lambda**: The AWS Lambda function that performs OpenAPI specification merging +- **Config_File**: A JSON configuration file that defines merge settings and source file references for a specific API +- **Source_Spec**: An individual OpenAPI specification file to be merged +- **Output_Spec**: The resulting merged OpenAPI specification file +- **API_Prefix**: An S3 key prefix that groups related config and source files (e.g., `/publicapi/`, `/internalapi/`) +- **Debounce_State_Machine**: An AWS Step Functions state machine that delays merge execution to batch rapid changes +- **CDK_Construct**: A reusable AWS CDK component that encapsulates the Lambda merge infrastructure +- **Input_Bucket**: The S3 bucket containing config files and source OpenAPI specs +- **Output_Bucket**: The S3 bucket where merged specs are written (may be same as Input_Bucket) + +## Requirements + +### Requirement 1: S3 Event-Triggered Merge Execution + +**User Story:** As a developer, I want the merge process to automatically trigger when I upload or modify OpenAPI spec files in S3, so that my merged API documentation stays current without manual intervention. + +#### Acceptance Criteria + +1. WHEN an S3 object is created or modified with a key matching `{prefix}/config.json`, THE Merge_Lambda SHALL initiate a merge operation for that API prefix +2. WHEN an S3 object is created or modified with a key matching `{prefix}/*.json` (excluding config.json), THE Merge_Lambda SHALL initiate a merge operation for that API prefix +3. WHEN an S3 object is deleted with a key matching `{prefix}/*.json`, THE Merge_Lambda SHALL initiate a merge operation for that API prefix +4. THE Merge_Lambda SHALL extract the API_Prefix from the S3 event key to determine which config file to load +5. 
WHEN multiple S3 events occur within a configurable debounce window, THE Debounce_State_Machine SHALL consolidate them into a single merge execution + +### Requirement 2: Configuration File Loading + +**User Story:** As a developer, I want to define my merge configuration in a JSON file within the same S3 prefix, so that each API's merge settings are self-contained and version-controlled. + +#### Acceptance Criteria + +1. WHEN a merge is triggered, THE Merge_Lambda SHALL load the config file from `{prefix}/config.json` in the Input_Bucket +2. THE Config_File SHALL support specifying source file patterns relative to the API_Prefix +3. THE Config_File SHALL support all existing MergeConfiguration options from the Oproto.Lambda.OpenApi.Merge library +4. IF the Config_File does not exist, THEN THE Merge_Lambda SHALL return an error and log the missing configuration +5. IF the Config_File contains invalid JSON, THEN THE Merge_Lambda SHALL return an error with descriptive message + +### Requirement 3: Source File Discovery and Loading + +**User Story:** As a developer, I want flexibility in how source files are discovered—either auto-discovery or explicit listing—so that I can control which files are merged during development. + +#### Acceptance Criteria + +1. THE MergeConfiguration class SHALL support an `autoDiscover` boolean property (default: false) +2. THE MergeConfiguration class SHALL support an `excludePatterns` array for glob patterns to exclude +3. WHEN `autoDiscover` is true, THE Merge_Lambda SHALL list all `.json` files within the API_Prefix (excluding config.json, output file, and files matching excludePatterns) +4. WHEN `autoDiscover` is false, THE Config_File SHALL contain a `sources` array listing explicit file names to merge +5. THE Merge_Lambda SHALL load each source file from the Input_Bucket +6. IF a source file cannot be parsed as valid OpenAPI, THEN THE Merge_Lambda SHALL log a warning and skip that file +7. 
IF no valid source files are found, THEN THE Merge_Lambda SHALL return an error indicating no sources to merge +8. WHEN using explicit sources, IF a listed file does not exist, THEN THE Merge_Lambda SHALL log an error for that file and continue with remaining files +9. THE CLI merge tool SHALL also support `autoDiscover` and `excludePatterns` for consistency + +### Requirement 4: Merge Execution + +**User Story:** As a developer, I want the Lambda to use the existing Oproto.Lambda.OpenApi.Merge library for merging, so that I get consistent behavior with the CLI tool. + +#### Acceptance Criteria + +1. THE Merge_Lambda SHALL use the OpenApiMerger class from Oproto.Lambda.OpenApi.Merge to perform merges +2. THE Merge_Lambda SHALL pass the loaded Config_File settings to the merger +3. WHEN the merge completes successfully, THE Merge_Lambda SHALL produce a valid OpenAPI specification +4. IF the merge fails due to conflicts, THEN THE Merge_Lambda SHALL return an error with conflict details + +### Requirement 5: Output Comparison and Conditional Write + +**User Story:** As a developer, I want the Lambda to only write the output file when the merged result differs from the existing output, so that downstream processes aren't triggered unnecessarily. + +#### Acceptance Criteria + +1. WHEN a merge completes, THE Merge_Lambda SHALL read the existing Output_Spec from the Output_Bucket if it exists +2. THE Merge_Lambda SHALL compare the new merged spec with the existing Output_Spec +3. IF the merged spec differs from the existing Output_Spec, THEN THE Merge_Lambda SHALL write the new spec to the Output_Bucket +4. IF the merged spec is identical to the existing Output_Spec, THEN THE Merge_Lambda SHALL skip writing and log that no changes were detected +5. 
THE comparison SHALL be performed on normalized JSON to ignore formatting differences + +### Requirement 6: Flexible Bucket Configuration + +**User Story:** As a developer, I want to configure whether input and output use the same bucket or separate buckets, so that I can adapt to different organizational requirements. + +#### Acceptance Criteria + +1. THE CDK_Construct SHALL support a single-bucket mode where Input_Bucket and Output_Bucket are the same +2. THE CDK_Construct SHALL support a dual-bucket mode where Input_Bucket and Output_Bucket are different +3. WHEN using single-bucket mode, THE Output_Spec SHALL be written to a configurable output path within the same bucket +4. WHEN using dual-bucket mode, THE Merge_Lambda SHALL have read permissions on Input_Bucket and write permissions on Output_Bucket +5. THE Config_File SHALL specify the output file path relative to the Output_Bucket + +### Requirement 7: Debounce via Step Functions + +**User Story:** As a developer, I want rapid successive file changes to be debounced into a single merge operation, so that I don't waste compute resources on intermediate states. + +#### Acceptance Criteria + +1. WHEN an S3 event triggers the system, THE Debounce_State_Machine SHALL start or reset a wait timer for that API_Prefix +2. THE Debounce_State_Machine SHALL use a configurable wait duration (default 5 seconds) +3. WHEN the wait timer expires without new events for the same API_Prefix, THE Debounce_State_Machine SHALL invoke the Merge_Lambda +4. IF a new event arrives for the same API_Prefix during the wait period, THE Debounce_State_Machine SHALL reset the timer +5. THE Debounce_State_Machine SHALL track separate timers for each distinct API_Prefix +6. 
THE Debounce_State_Machine SHALL be defined using JSONata query language for data transformations where possible + +### Requirement 8: CDK Construct Library for Easy Deployment + +**User Story:** As an OSS consumer, I want a simple CDK construct that I can add to my infrastructure code, so that I can deploy the merge Lambda with minimal configuration. + +#### Acceptance Criteria + +1. THE CDK_Construct SHALL be published as a separate NuGet package (Oproto.Lambda.OpenApi.Merge.Cdk) +2. THE CDK_Construct SHALL accept Input_Bucket as a required parameter +3. THE CDK_Construct SHALL accept Output_Bucket as an optional parameter (defaults to Input_Bucket) +4. THE CDK_Construct SHALL accept a list of API_Prefix values to configure S3 event filters +5. THE CDK_Construct SHALL create all necessary IAM roles and permissions +6. THE CDK_Construct SHALL create the Debounce_State_Machine with configurable wait duration +7. THE CDK_Construct SHALL expose the created Lambda function ARN and Step Function ARN as outputs + +### Requirement 8a: CloudFormation Template for Non-CDK Users + +**User Story:** As a developer who doesn't use CDK, I want a CloudFormation template that I can deploy directly, so that I can use the merge Lambda without adopting CDK. + +#### Acceptance Criteria + +1. THE project SHALL include a standalone CloudFormation template (YAML format) +2. THE CloudFormation template SHALL accept parameters for Input_Bucket, Output_Bucket, and API_Prefix list +3. THE CloudFormation template SHALL create the same resources as the CDK_Construct +4. THE CloudFormation template SHALL be documented with deployment instructions +5. THE CDK_Construct SHALL be capable of synthesizing to the CloudFormation template for consistency + +### Requirement 9: Multi-API Support with Single Deployment + +**User Story:** As a developer, I want a single Lambda deployment to handle multiple API prefixes, so that I minimize infrastructure overhead. + +#### Acceptance Criteria + +1. 
THE Merge_Lambda SHALL be capable of processing events for any API_Prefix +2. THE CDK_Construct SHALL configure S3 event notifications for all specified API_Prefix values +3. WHEN processing an event, THE Merge_Lambda SHALL dynamically determine the API_Prefix from the S3 key +4. THE Merge_Lambda SHALL maintain no state between invocations (stateless design) + +### Requirement 10: Error Handling and Observability + +**User Story:** As an operator, I want comprehensive logging and configurable metrics/alarms, so that I can troubleshoot issues and monitor the merge process according to my needs. + +#### Acceptance Criteria + +1. THE Merge_Lambda SHALL log the start and completion of each merge operation with timing information +2. THE Merge_Lambda SHALL log all S3 read and write operations +3. IF an error occurs, THEN THE Merge_Lambda SHALL log the full error details including stack trace +4. THE Merge_Lambda SHALL emit CloudWatch metrics for: merge duration, success count, failure count, files processed +5. THE CDK_Construct SHALL accept an `enableAlarms` boolean parameter (default: true) +6. WHEN `enableAlarms` is true, THE CDK_Construct SHALL create a CloudWatch alarm for merge failures +7. THE CDK_Construct SHALL accept an `alarmThreshold` parameter for failure count threshold (default: 1) +8. THE CDK_Construct SHALL accept an `alarmEvaluationPeriods` parameter (default: 1) +9. THE CDK_Construct SHALL accept an optional SNS topic ARN for alarm notifications + +### Requirement 11: Lambda Annotations Integration + +**User Story:** As a developer familiar with the Oproto.Lambda.OpenApi library, I want the merge Lambda to use Lambda Annotations, so that the codebase is consistent and I can learn from the implementation. + +#### Acceptance Criteria + +1. THE Merge_Lambda SHALL be implemented using Amazon.Lambda.Annotations +2. THE Merge_Lambda SHALL use the S3 event source binding +3. 
THE Merge_Lambda SHALL follow the same coding patterns as the Oproto.Lambda.OpenApi.Examples project + +### Requirement 12: Documentation and Changelog Updates + +**User Story:** As a user of the library, I want comprehensive documentation for the new Lambda merge tool and updated changelogs, so that I can understand how to use and deploy it. + +#### Acceptance Criteria + +1. THE project SHALL update CHANGELOG.md with all new features and breaking changes +2. THE project SHALL create docs/lambda-merge.md with deployment and usage instructions +3. THE documentation SHALL include example config files for both auto-discover and explicit sources modes +4. THE documentation SHALL include CDK construct usage examples +5. THE documentation SHALL include CloudFormation deployment instructions +6. THE documentation SHALL document the debounce behavior and timing considerations +7. THE existing docs/merge-tool.md SHALL be updated to document `autoDiscover` and `excludePatterns` options +8. THE README.md SHALL be updated to reference the new Lambda merge tool diff --git a/.kiro/specs/lambda-merge-tool/tasks.md b/.kiro/specs/lambda-merge-tool/tasks.md new file mode 100644 index 0000000..20dffe0 --- /dev/null +++ b/.kiro/specs/lambda-merge-tool/tasks.md @@ -0,0 +1,241 @@ +# Implementation Plan: Lambda Merge Tool + +## Overview + +This implementation plan creates an AWS Lambda-based OpenAPI merge solution with S3 event triggers, Step Functions debouncing, and a CDK construct for easy deployment. The implementation is divided into phases: base library updates, Lambda function, CDK construct, and documentation. + +## Tasks + +- [x] 1. 
Update base MergeConfiguration with auto-discover support + - [x] 1.1 Add `AutoDiscover` and `ExcludePatterns` properties to MergeConfiguration + - Add `AutoDiscover` boolean property with default false + - Add `ExcludePatterns` list property with default empty list + - Add JSON serialization attributes + - _Requirements: 3.1, 3.2_ + + - [x] 1.2 Write property test for config compatibility + - **Property 2: Config Compatibility Round-Trip** + - **Validates: Requirements 2.3, 3.1** + + - [x] 1.3 Update CLI merge tool to support auto-discover + - Implement file discovery logic in MergeCommand + - Apply exclude patterns using glob matching + - Skip output file automatically + - _Requirements: 3.9_ + + - [x] 1.4 Write unit tests for CLI auto-discover + - Test auto-discover finds all JSON files + - Test exclude patterns are applied + - Test output file is excluded + - _Requirements: 3.2, 3.9_ + +- [x] 2. Checkpoint - Ensure base library changes work + - Ensure all tests pass, ask the user if questions arise. + +- [x] 3. Create Lambda project structure + - [x] 3.1 Create Oproto.Lambda.OpenApi.Merge.Lambda project + - Create project file with Lambda Annotations dependencies + - Add reference to Oproto.Lambda.OpenApi.Merge + - Add AWS SDK dependencies (S3, DynamoDB, CloudWatch) + - Create folder structure (Functions, Services, Models) + - _Requirements: 11.1, 11.3_ + + - [x] 3.2 Create data models + - Create LambdaMergeConfig extending MergeConfiguration + - Create MergeRequest record + - Create MergeResponse record + - Create MergeMetrics record + - _Requirements: 2.3_ + + - [x] 3.3 Write unit tests for data models + - Test LambdaMergeConfig deserialization + - Test default values + - _Requirements: 2.3_ + +- [x] 4. 
Implement S3 service layer + - [x] 4.1 Create IS3Service interface and S3Service implementation + - Implement ReadJsonAsync + - Implement ReadTextAsync + - Implement WriteJsonAsync + - Implement ListObjectsAsync + - Implement ExistsAsync + - _Requirements: 2.1, 3.4, 5.1_ + + - [x] 4.2 Write unit tests for S3Service + - Test JSON serialization/deserialization + - Test list objects filtering + - _Requirements: 2.1, 3.4_ + +- [x] 5. Implement config loader + - [x] 5.1 Create IConfigLoader interface and ConfigLoader implementation + - Load config from {prefix}/config.json + - Validate required fields + - Handle missing config error + - Handle invalid JSON error + - _Requirements: 2.1, 2.4, 2.5_ + + - [x] 5.2 Write property test for prefix extraction + - **Property 1: Prefix Extraction Consistency** + - **Validates: Requirements 1.1, 1.2, 1.3, 1.4** + + - [x] 5.3 Write unit tests for config loader + - Test valid config loading + - Test missing config error + - Test invalid JSON error + - _Requirements: 2.4, 2.5_ + +- [x] 6. Implement source discovery + - [x] 6.1 Create ISourceDiscovery interface and SourceDiscovery implementation + - Implement auto-discover mode (list and filter files) + - Implement explicit sources mode + - Apply exclude patterns + - Exclude config.json and output file + - _Requirements: 3.2, 3.3, 3.4_ + + - [x] 6.2 Write property test for auto-discovery filtering + - **Property 3: Auto-Discovery Filtering** + - **Validates: Requirements 3.2** + + - [x] 6.3 Write property test for explicit sources validation + - **Property 4: Explicit Sources Validation** + - **Validates: Requirements 3.3** + + - [x] 6.4 Write unit tests for source discovery + - Test auto-discover finds JSON files + - Test auto-discover excludes config.json + - Test auto-discover excludes output file + - Test exclude patterns work + - Test explicit sources mode + - _Requirements: 3.2, 3.3_ + +- [x] 7. 
Checkpoint - Ensure service layer works + - Ensure all tests pass, ask the user if questions arise. + +- [x] 8. Implement output comparison + - [x] 8.1 Create output comparison logic + - Implement JSON normalization (sorted keys, consistent formatting) + - Implement semantic comparison + - _Requirements: 5.2, 5.5_ + + - [x] 8.2 Write property test for output comparison normalization + - **Property 5: Output Comparison Normalization** + - **Validates: Requirements 5.2, 5.5** + + - [x] 8.3 Write property test for conditional write + - **Property 6: Conditional Write Correctness** + - **Validates: Requirements 5.3, 5.4** + +- [x] 9. Implement merge Lambda function + - [x] 9.1 Create MergeFunction with Lambda Annotations + - Implement HandleMerge method + - Wire up dependency injection + - Extract prefix from request + - Load config, discover sources, merge, compare, write + - Return MergeResponse with metrics + - _Requirements: 1.4, 4.1, 4.2, 4.3, 5.3, 5.4, 11.1, 11.2_ + + - [x] 9.2 Implement error handling + - Handle config not found + - Handle invalid config + - Handle no valid sources + - Handle merge conflicts + - Handle S3 errors + - _Requirements: 2.4, 2.5, 3.5, 3.6, 3.7, 4.4_ + + - [x] 9.3 Implement CloudWatch metrics emission + - Emit merge duration metric + - Emit success/failure count + - Emit files processed count + - _Requirements: 10.1, 10.4_ + + - [x] 9.4 Write unit tests for merge function + - Test successful merge flow + - Test error handling scenarios + - Test conditional write logic + - _Requirements: 4.1, 4.2, 5.3, 5.4_ + +- [x] 10. Checkpoint - Ensure Lambda function works + - Ensure all tests pass, ask the user if questions arise. + +- [x] 11. 
Create CDK construct project + - [x] 11.1 Create Oproto.Lambda.OpenApi.Merge.Cdk project + - Create project file with CDK dependencies + - Add reference to Lambda project for asset bundling + - _Requirements: 8.1_ + + - [x] 11.2 Create OpenApiMergeConstructProps + - Define all configurable properties + - Set sensible defaults + - _Requirements: 8.2, 8.3, 8.4, 8.6, 10.5, 10.7, 10.8, 10.9_ + + - [x] 11.3 Implement OpenApiMergeConstruct + - Create DynamoDB table for debounce state + - Create Lambda function with proper IAM role + - Create Step Functions state machine + - Create EventBridge rules for S3 events + - Configure S3 event notifications + - Create CloudWatch alarms (if enabled) + - Expose outputs (Lambda ARN, Step Function ARN) + - _Requirements: 8.5, 8.6, 8.7, 6.1, 6.2, 6.4_ + + - [x] 11.4 Write property test for output path construction + - **Property 7: Output Path Construction** + - **Validates: Requirements 6.3, 6.5** + +- [x] 12. Create Step Functions state machine definition + - [x] 12.1 Create state machine JSON with JSONata + - Implement prefix extraction + - Implement debounce logic with DynamoDB + - Implement post-merge event checking + - Handle merge Lambda invocation + - Implement cleanup + - _Requirements: 7.1, 7.2, 7.3, 7.4, 7.5, 7.6_ + +- [x] 13. Create CloudFormation template + - [x] 13.1 Create standalone CloudFormation template + - Define parameters for buckets and prefixes + - Create all resources matching CDK construct + - Add outputs for created resources + - _Requirements: 8a.1, 8a.2, 8a.3, 8a.4_ + +- [x] 14. Checkpoint - Ensure CDK and CloudFormation work + - Ensure all tests pass, ask the user if questions arise. + +- [x] 15. 
Update documentation + - [x] 15.1 Create docs/lambda-merge.md + - Document deployment options (CDK vs CloudFormation) + - Document config file format + - Include example configs + - Document debounce behavior + - Document troubleshooting + - _Requirements: 12.2, 12.3, 12.4, 12.5, 12.6_ + + - [x] 15.2 Update docs/merge-tool.md + - Document autoDiscover option + - Document excludePatterns option + - Add examples + - _Requirements: 12.7_ + + - [x] 15.3 Update CHANGELOG.md + - Document new Lambda merge tool + - Document autoDiscover and excludePatterns additions + - Document any breaking changes + - _Requirements: 12.1_ + + - [x] 15.4 Update README.md + - Add section for Lambda merge tool + - Link to detailed documentation + - _Requirements: 12.8_ + +- [x] 16. Final checkpoint - All tests pass and documentation complete + - Ensure all tests pass, ask the user if questions arise. + +## Notes + +- All tasks are required for comprehensive implementation +- Each task references specific requirements for traceability +- Checkpoints ensure incremental validation +- Property tests validate universal correctness properties +- Unit tests validate specific examples and edge cases +- The implementation uses C# consistent with the existing codebase +- FsCheck is used for property-based testing (already in use in the project) diff --git a/CHANGELOG.md b/CHANGELOG.md index 93c1727..2426114 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -9,6 +9,31 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0 ### Added +- **Lambda Merge Tool** + - New `Oproto.Lambda.OpenApi.Merge.Lambda` package for AWS Lambda-based OpenAPI merging + - Automatic merging triggered by S3 events (object created, modified, or deleted) + - Step Functions-based debouncing to batch rapid successive changes + - Configurable debounce wait duration (default: 5 seconds) + - Post-merge event checking to ensure no changes are missed during merge execution + - Conditional writes - only 
updates output when merged result differs from existing + - CloudWatch metrics: merge duration, success/failure counts, files processed + - Comprehensive error handling with detailed logging + +- **CDK Construct for Lambda Merge** + - New `Oproto.Lambda.OpenApi.Merge.Cdk` package with reusable CDK construct + - `OpenApiMergeConstruct` creates all required AWS resources + - Configurable CloudWatch alarms for merge failures + - Support for single-bucket or dual-bucket configurations + - Multi-API prefix support with single deployment + - Standalone CloudFormation template for non-CDK users + +- **Auto-Discovery Mode (Merge Tool)** + - New `autoDiscover` configuration option for automatic source file discovery + - When enabled, finds all `.json` files in the directory (excluding config and output) + - New `excludePatterns` option for glob-based file exclusion + - Supported in both CLI merge tool and Lambda merge tool + - Automatically excludes the output file to prevent circular merges + - **Deterministic Output** - OpenAPI output is now fully deterministic across multiple runs with identical input - Paths sorted alphabetically by path string diff --git a/Oproto.Lambda.OpenApi.Merge.Cdk/OpenApiMergeConstruct.cs b/Oproto.Lambda.OpenApi.Merge.Cdk/OpenApiMergeConstruct.cs new file mode 100644 index 0000000..92998f2 --- /dev/null +++ b/Oproto.Lambda.OpenApi.Merge.Cdk/OpenApiMergeConstruct.cs @@ -0,0 +1,283 @@ +using Amazon.CDK; +using Amazon.CDK.AWS.CloudWatch; +using Amazon.CDK.AWS.CloudWatch.Actions; +using Amazon.CDK.AWS.DynamoDB; +using Amazon.CDK.AWS.Events; +using Amazon.CDK.AWS.Events.Targets; +using Amazon.CDK.AWS.IAM; +using Amazon.CDK.AWS.Lambda; +using Amazon.CDK.AWS.S3; +using Amazon.CDK.AWS.SNS; +using Amazon.CDK.AWS.StepFunctions; +using Constructs; + +namespace Oproto.Lambda.OpenApi.Merge.Cdk; + +/// +/// CDK construct that creates the OpenAPI merge infrastructure including: +/// - Lambda function for merging OpenAPI specs +/// - DynamoDB table for debounce 
state +/// - Step Functions state machine for debouncing +/// - EventBridge rules for S3 events +/// - CloudWatch alarms (optional) +/// +public class OpenApiMergeConstruct : Construct +{ + /// + /// The Lambda function that performs the merge operation. + /// + public IFunction MergeFunction { get; } + + /// + /// The Step Functions state machine that handles debouncing. + /// + public IStateMachine StateMachine { get; } + + /// + /// The DynamoDB table used for debounce state tracking. + /// + public ITable DebounceTable { get; } + + /// + /// The CloudWatch alarm for merge failures (if enabled). + /// + public IAlarm? FailureAlarm { get; } + + /// + /// The effective output bucket (either explicitly provided or same as input). + /// + public IBucket OutputBucket { get; } + + public OpenApiMergeConstruct(Construct scope, string id, OpenApiMergeConstructProps props) + : base(scope, id) + { + ValidateProps(props); + + OutputBucket = props.OutputBucket ?? props.InputBucket; + + // Create DynamoDB table for debounce state + DebounceTable = CreateDebounceTable(props); + + // Create Lambda function + MergeFunction = CreateMergeFunction(props); + + // Create Step Functions state machine + StateMachine = CreateStateMachine(props); + + // Create EventBridge rules for S3 events + CreateEventBridgeRules(props); + + // Create CloudWatch alarms if enabled + if (props.EnableAlarms) + { + FailureAlarm = CreateFailureAlarm(props); + } + + // Create outputs + CreateOutputs(); + } + + private static void ValidateProps(OpenApiMergeConstructProps props) + { + if (props.ApiPrefixes == null || props.ApiPrefixes.Count == 0) + { + throw new ArgumentException("At least one API prefix must be specified", nameof(props)); + } + + if (props.DebounceSeconds < 1) + { + throw new ArgumentException("DebounceSeconds must be at least 1", nameof(props)); + } + + if (props.MemorySize < 128 || props.MemorySize > 10240) + { + throw new ArgumentException("MemorySize must be between 128 and 10240 MB", 
nameof(props)); + } + + if (props.TimeoutSeconds < 1 || props.TimeoutSeconds > 900) + { + throw new ArgumentException("TimeoutSeconds must be between 1 and 900", nameof(props)); + } + } + + private Table CreateDebounceTable(OpenApiMergeConstructProps props) + { + return new Table(this, "DebounceTable", new TableProps + { + TableName = props.DebounceTableName, + PartitionKey = new Amazon.CDK.AWS.DynamoDB.Attribute + { + Name = "prefix", + Type = AttributeType.STRING + }, + BillingMode = BillingMode.PAY_PER_REQUEST, + RemovalPolicy = RemovalPolicy.DESTROY, + TimeToLiveAttribute = "ttl" + }); + } + + private Function CreateMergeFunction(OpenApiMergeConstructProps props) + { + var function = new Function(this, "MergeFunction", new FunctionProps + { + FunctionName = props.FunctionName, + Runtime = Runtime.DOTNET_8, + Handler = "Oproto.Lambda.OpenApi.Merge.Lambda::Oproto.Lambda.OpenApi.Merge.Lambda.Functions.MergeFunction_HandleMerge_Generated::HandleMerge", + Code = Code.FromAsset(GetLambdaAssetPath()), + MemorySize = props.MemorySize, + Timeout = Duration.Seconds(props.TimeoutSeconds), + Environment = new Dictionary<string, string> + { + ["OUTPUT_BUCKET"] = OutputBucket.BucketName + }, + Tracing = Tracing.ACTIVE + }); + + // Grant S3 permissions + props.InputBucket.GrantRead(function); + OutputBucket.GrantReadWrite(function); + + // Grant CloudWatch metrics permissions + function.AddToRolePolicy(new PolicyStatement(new PolicyStatementProps + { + Actions = new[] { "cloudwatch:PutMetricData" }, + Resources = new[] { "*" } + })); + + return function; + } + + private StateMachine CreateStateMachine(OpenApiMergeConstructProps props) + { + var stateMachineDefinition = CreateStateMachineDefinition(props); + + var stateMachine = new StateMachine(this, "DebounceStateMachine", new StateMachineProps + { + StateMachineName = props.StateMachineName, + DefinitionBody = DefinitionBody.FromString(stateMachineDefinition), + StateMachineType = StateMachineType.STANDARD, + Timeout =
Duration.Minutes(10), + TracingEnabled = true + }); + + // Grant DynamoDB permissions to state machine + DebounceTable.GrantReadWriteData(stateMachine); + + // Grant Lambda invoke permissions to state machine + MergeFunction.GrantInvoke(stateMachine); + + return stateMachine; + } + + private string CreateStateMachineDefinition(OpenApiMergeConstructProps props) + { + var substitutions = StateMachineDefinitionLoader.CreateSubstitutions( + debounceTableName: DebounceTable.TableName, + mergeFunctionArn: MergeFunction.FunctionArn, + outputBucket: OutputBucket.BucketName, + debounceSeconds: props.DebounceSeconds + ); + + return StateMachineDefinitionLoader.LoadDefinition(substitutions); + } + + private void CreateEventBridgeRules(OpenApiMergeConstructProps props) + { + foreach (var prefix in props.ApiPrefixes) + { + var normalizedPrefix = prefix.TrimEnd('/'); + var ruleName = $"OpenApiMerge-{normalizedPrefix.Replace("/", "-")}"; + + var rule = new Rule(this, $"S3EventRule-{normalizedPrefix.Replace("/", "-")}", new RuleProps + { + RuleName = ruleName.Length > 64 ? 
ruleName.Substring(0, 64) : ruleName, + EventPattern = new EventPattern + { + Source = new[] { "aws.s3" }, + DetailType = new[] { "Object Created", "Object Deleted" }, + Detail = new Dictionary<string, object> + { + ["bucket"] = new Dictionary<string, object> + { + ["name"] = new[] { props.InputBucket.BucketName } + }, + ["object"] = new Dictionary<string, object> + { + ["key"] = new[] { new Dictionary<string, object> { ["prefix"] = normalizedPrefix + "/" } } + } + } + } + }); + + rule.AddTarget(new SfnStateMachine(StateMachine)); + } + } + + private Alarm CreateFailureAlarm(OpenApiMergeConstructProps props) + { + var metric = new Metric(new MetricProps + { + Namespace = "OpenApiMerge", + MetricName = "MergeFailures", + Statistic = "Sum", + Period = Duration.Minutes(5) + }); + + var alarm = new Alarm(this, "MergeFailureAlarm", new AlarmProps + { + AlarmName = "OpenApiMerge-Failures", + AlarmDescription = "Alarm when OpenAPI merge operations fail", + Metric = metric, + Threshold = props.AlarmThreshold, + EvaluationPeriods = props.AlarmEvaluationPeriods, + ComparisonOperator = ComparisonOperator.GREATER_THAN_OR_EQUAL_TO_THRESHOLD, + TreatMissingData = TreatMissingData.NOT_BREACHING + }); + + if (props.AlarmTopic != null) + { + alarm.AddAlarmAction(new SnsAction(props.AlarmTopic)); + } + + return alarm; + } + + private void CreateOutputs() + { + _ = new CfnOutput(this, "MergeFunctionArn", new CfnOutputProps + { + Value = MergeFunction.FunctionArn, + Description = "ARN of the OpenAPI merge Lambda function" + }); + + _ = new CfnOutput(this, "StateMachineArn", new CfnOutputProps + { + Value = StateMachine.StateMachineArn, + Description = "ARN of the debounce Step Functions state machine" + }); + + _ = new CfnOutput(this, "DebounceTableName", new CfnOutputProps + { + Value = DebounceTable.TableName, + Description = "Name of the DynamoDB debounce table" + }); + } + + private static string GetLambdaAssetPath() + { + // This returns the path to the Lambda project for bundling + // In a real deployment, this would be the published output
directory + return Path.Combine( + Path.GetDirectoryName(typeof(OpenApiMergeConstruct).Assembly.Location) ?? ".", + "..", + "..", + "..", + "..", + "Oproto.Lambda.OpenApi.Merge.Lambda", + "bin", + "Release", + "net8.0", + "publish" + ); + } +} diff --git a/Oproto.Lambda.OpenApi.Merge.Cdk/OpenApiMergeConstructProps.cs b/Oproto.Lambda.OpenApi.Merge.Cdk/OpenApiMergeConstructProps.cs new file mode 100644 index 0000000..255defe --- /dev/null +++ b/Oproto.Lambda.OpenApi.Merge.Cdk/OpenApiMergeConstructProps.cs @@ -0,0 +1,82 @@ +using Amazon.CDK.AWS.S3; +using Amazon.CDK.AWS.SNS; + +namespace Oproto.Lambda.OpenApi.Merge.Cdk; + +/// +/// Properties for configuring the OpenAPI merge construct. +/// +public class OpenApiMergeConstructProps +{ + /// + /// The S3 bucket containing input files (config and source specs). Required. + /// + public required IBucket InputBucket { get; init; } + + /// + /// The S3 bucket for output files. Defaults to InputBucket if not specified. + /// + public IBucket? OutputBucket { get; init; } + + /// + /// List of API prefixes to monitor for changes (e.g., "publicapi/", "internalapi/"). + /// Each prefix should end with a forward slash. + /// + public required IReadOnlyList<string> ApiPrefixes { get; init; } + + /// + /// Debounce wait duration in seconds. Default: 5. + /// This is the time to wait after the last S3 event before triggering a merge. + /// + public int DebounceSeconds { get; init; } = 5; + + /// + /// Whether to create CloudWatch alarms for merge failures. Default: true. + /// + public bool EnableAlarms { get; init; } = true; + + /// + /// Failure count threshold for CloudWatch alarms. Default: 1. + /// An alarm will trigger when failures reach or exceed this threshold. + /// + public int AlarmThreshold { get; init; } = 1; + + /// + /// Number of evaluation periods for CloudWatch alarms. Default: 1. + /// + public int AlarmEvaluationPeriods { get; init; } = 1; + + /// + /// Optional SNS topic for alarm notifications.
+ /// If not specified, alarms will be created without notification actions. + /// + public ITopic? AlarmTopic { get; init; } + + /// + /// Lambda memory size in MB. Default: 512. + /// + public int MemorySize { get; init; } = 512; + + /// + /// Lambda timeout in seconds. Default: 60. + /// + public int TimeoutSeconds { get; init; } = 60; + + /// + /// Optional custom name for the Lambda function. + /// If not specified, CDK will generate a unique name. + /// + public string? FunctionName { get; init; } + + /// + /// Optional custom name for the Step Functions state machine. + /// If not specified, CDK will generate a unique name. + /// + public string? StateMachineName { get; init; } + + /// + /// Optional custom name for the DynamoDB debounce table. + /// If not specified, CDK will generate a unique name. + /// + public string? DebounceTableName { get; init; } +} diff --git a/Oproto.Lambda.OpenApi.Merge.Cdk/Oproto.Lambda.OpenApi.Merge.Cdk.csproj b/Oproto.Lambda.OpenApi.Merge.Cdk/Oproto.Lambda.OpenApi.Merge.Cdk.csproj new file mode 100644 index 0000000..abf4894 --- /dev/null +++ b/Oproto.Lambda.OpenApi.Merge.Cdk/Oproto.Lambda.OpenApi.Merge.Cdk.csproj @@ -0,0 +1,38 @@ + + + + net8.0 + enable + enable + 12 + + + true + Oproto.Lambda.OpenApi.Merge.Cdk + AWS CDK construct for deploying the OpenAPI merge Lambda function with S3 event triggers and Step Functions debouncing. + README.md + + + true + $(NoWarn);CS1591 + + + + + + + + + + + + + + + + + + + + + diff --git a/Oproto.Lambda.OpenApi.Merge.Cdk/OutputPathHelper.cs b/Oproto.Lambda.OpenApi.Merge.Cdk/OutputPathHelper.cs new file mode 100644 index 0000000..515840f --- /dev/null +++ b/Oproto.Lambda.OpenApi.Merge.Cdk/OutputPathHelper.cs @@ -0,0 +1,68 @@ +namespace Oproto.Lambda.OpenApi.Merge.Cdk; + +/// +/// Helper class for constructing output paths for merged OpenAPI specs. +/// +public static class OutputPathHelper +{ + /// + /// Constructs the full output path for a merged OpenAPI spec. 
+ /// If outputFilename starts with '/' or contains '/', it's treated as an absolute/full path. + /// Otherwise, it's relative to the prefix. + /// + /// The API prefix (e.g., "publicapi/") + /// The output filename or path from config + /// The full S3 key for the output file + public static string ConstructOutputPath(string prefix, string outputFilename) + { + if (string.IsNullOrWhiteSpace(outputFilename)) + { + throw new ArgumentException("Output filename cannot be null or empty", nameof(outputFilename)); + } + + // If output starts with '/', treat it as absolute (remove leading slash for S3) + if (outputFilename.StartsWith("/")) + { + return outputFilename.TrimStart('/'); + } + + // If output contains '/', treat it as a full path (not relative to prefix) + if (outputFilename.Contains("/")) + { + return outputFilename; + } + + // Otherwise, it's a simple filename relative to the prefix + if (string.IsNullOrWhiteSpace(prefix)) + { + return outputFilename; + } + + // Normalize prefix to ensure it ends with / + var normalizedPrefix = prefix.TrimEnd('/') + "/"; + + return normalizedPrefix + outputFilename; + } + + /// + /// Extracts the prefix from an S3 key. 
+ /// + /// The full S3 key (e.g., "publicapi/service.json") + /// The prefix portion (e.g., "publicapi/") + public static string ExtractPrefix(string key) + { + if (string.IsNullOrWhiteSpace(key)) + { + throw new ArgumentException("Key cannot be null or empty", nameof(key)); + } + + var lastSlashIndex = key.LastIndexOf('/'); + if (lastSlashIndex < 0) + { + // No slash found, return empty prefix + return string.Empty; + } + + return key.Substring(0, lastSlashIndex + 1); + } +} diff --git a/Oproto.Lambda.OpenApi.Merge.Cdk/README.md b/Oproto.Lambda.OpenApi.Merge.Cdk/README.md new file mode 100644 index 0000000..2cc5e9f --- /dev/null +++ b/Oproto.Lambda.OpenApi.Merge.Cdk/README.md @@ -0,0 +1,102 @@ +# Oproto.Lambda.OpenApi.Merge.Cdk + +AWS CDK construct for deploying the OpenAPI merge Lambda function with S3 event triggers and Step Functions debouncing. + +## Installation + +### CDK (Recommended) + +```bash +dotnet add package Oproto.Lambda.OpenApi.Merge.Cdk +``` + +### CloudFormation (Alternative) + +For users who don't use CDK, a standalone CloudFormation template is available at `cloudformation/openapi-merge.yaml`. + +## Quick Start + +### Using CDK + +```csharp +using Amazon.CDK; +using Amazon.CDK.AWS.S3; +using Oproto.Lambda.OpenApi.Merge.Cdk; + +var bucket = new Bucket(this, "ApiBucket"); + +var mergeConstruct = new OpenApiMergeConstruct(this, "OpenApiMerge", new OpenApiMergeConstructProps +{ + InputBucket = bucket, + ApiPrefixes = new[] { "publicapi/", "internalapi/" }, + DebounceSeconds = 5, + EnableAlarms = true +}); +``` + +### Using CloudFormation + +1. First, build and package the Lambda function: + ```bash + dotnet publish Oproto.Lambda.OpenApi.Merge.Lambda -c Release -o ./publish + cd publish && zip -r ../lambda-package.zip . && cd .. + ``` + +2. Upload the package to S3: + ```bash + aws s3 cp lambda-package.zip s3://your-deployment-bucket/openapi-merge/lambda-package.zip + ``` + +3. 
Deploy the CloudFormation stack: + ```bash + aws cloudformation create-stack \ + --stack-name openapi-merge \ + --template-body file://Oproto.Lambda.OpenApi.Merge.Cdk/cloudformation/openapi-merge.yaml \ + --parameters \ + ParameterKey=InputBucketName,ParameterValue=your-api-specs-bucket \ + ParameterKey=LambdaCodeS3Bucket,ParameterValue=your-deployment-bucket \ + ParameterKey=LambdaCodeS3Key,ParameterValue=openapi-merge/lambda-package.zip \ + --capabilities CAPABILITY_NAMED_IAM + ``` + +## Features + +- Automatic OpenAPI spec merging on S3 file changes +- Step Functions-based debouncing to batch rapid changes +- Configurable CloudWatch alarms +- Support for single-bucket or dual-bucket configurations +- Multi-API prefix support with single deployment + +## CloudFormation Parameters + +| Parameter | Description | Default | +|-----------|-------------|---------| +| `InputBucketName` | S3 bucket containing input files (required) | - | +| `OutputBucketName` | S3 bucket for output files (optional, defaults to input bucket) | '' | +| `ApiPrefixes` | Comma-separated list of API prefixes to monitor | '' | +| `LambdaCodeS3Bucket` | S3 bucket containing the Lambda deployment package (required) | - | +| `LambdaCodeS3Key` | S3 key for the Lambda deployment package (required) | - | +| `MemorySize` | Lambda memory size in MB | 512 | +| `TimeoutSeconds` | Lambda timeout in seconds | 60 | +| `DebounceSeconds` | Debounce wait time in seconds | 5 | +| `EnableAlarms` | Whether to create CloudWatch alarms | 'true' | +| `AlarmThreshold` | Failure count threshold for alarms | 1 | +| `AlarmEvaluationPeriods` | Number of evaluation periods for alarms | 1 | +| `AlarmSnsTopicArn` | Optional SNS topic ARN for alarm notifications | '' | + +## CloudFormation Outputs + +| Output | Description | +|--------|-------------| +| `MergeFunctionArn` | ARN of the OpenAPI merge Lambda function | +| `MergeFunctionName` | Name of the Lambda function | +| `StateMachineArn` | ARN of the debounce Step 
Functions state machine | +| `StateMachineName` | Name of the state machine | +| `DebounceTableName` | Name of the DynamoDB debounce table | +| `DebounceTableArn` | ARN of the DynamoDB table | +| `EventBridgeRuleArn` | ARN of the EventBridge rule for S3 events | +| `OutputBucketName` | Name of the output S3 bucket | + +## Documentation + +See the [full documentation](https://github.com/oproto/lambda-openapi/blob/main/docs/lambda-merge.md) for detailed usage instructions. diff --git a/Oproto.Lambda.OpenApi.Merge.Cdk/StateMachineDefinitionLoader.cs b/Oproto.Lambda.OpenApi.Merge.Cdk/StateMachineDefinitionLoader.cs new file mode 100644 index 0000000..eaefc28 --- /dev/null +++ b/Oproto.Lambda.OpenApi.Merge.Cdk/StateMachineDefinitionLoader.cs @@ -0,0 +1,84 @@ +using System.Reflection; + +namespace Oproto.Lambda.OpenApi.Merge.Cdk; + +/// +/// Loads and processes the Step Functions state machine definition. +/// +public static class StateMachineDefinitionLoader +{ + private const string StateMachineResourceName = "Oproto.Lambda.OpenApi.Merge.Cdk.StateMachines.debounce-state-machine.json"; + + /// + /// Loads the state machine definition from embedded resources and substitutes placeholders. + /// + /// Dictionary of placeholder names to values (without ${} wrapper). + /// The state machine definition JSON with substitutions applied. + public static string LoadDefinition(IDictionary<string, string> substitutions) + { + var definition = LoadEmbeddedResource(); + return ApplySubstitutions(definition, substitutions); + } + + /// + /// Loads the raw state machine definition from embedded resources. + /// + /// The raw state machine definition JSON.
+ public static string LoadRawDefinition() + { + return LoadEmbeddedResource(); + } + + private static string LoadEmbeddedResource() + { + var assembly = Assembly.GetExecutingAssembly(); + using var stream = assembly.GetManifestResourceStream(StateMachineResourceName); + + if (stream == null) + { + throw new InvalidOperationException( + $"Could not find embedded resource '{StateMachineResourceName}'. " + + "Ensure the state machine JSON file is included as an embedded resource."); + } + + using var reader = new StreamReader(stream); + return reader.ReadToEnd(); + } + + private static string ApplySubstitutions(string definition, IDictionary<string, string> substitutions) + { + var result = definition; + + foreach (var (key, value) in substitutions) + { + // Replace ${Key} with the value + result = result.Replace($"${{{key}}}", value); + } + + return result; + } + + /// + /// Creates the standard substitutions dictionary for the state machine. + /// + /// The DynamoDB table name for debounce state. + /// The ARN of the merge Lambda function. + /// The name of the output S3 bucket. + /// The debounce wait duration in seconds. + /// A dictionary of substitutions.
+ public static IDictionary<string, string> CreateSubstitutions( + string debounceTableName, + string mergeFunctionArn, + string outputBucket, + int debounceSeconds) + { + return new Dictionary<string, string> + { + ["DebounceTable"] = debounceTableName, + ["MergeFunctionArn"] = mergeFunctionArn, + ["OutputBucket"] = outputBucket, + ["DebounceSeconds"] = debounceSeconds.ToString() + }; + } +} diff --git a/Oproto.Lambda.OpenApi.Merge.Cdk/StateMachines/debounce-state-machine.json b/Oproto.Lambda.OpenApi.Merge.Cdk/StateMachines/debounce-state-machine.json new file mode 100644 index 0000000..5c83f1c --- /dev/null +++ b/Oproto.Lambda.OpenApi.Merge.Cdk/StateMachines/debounce-state-machine.json @@ -0,0 +1,206 @@ +{ + "Comment": "Debounce OpenAPI merge operations per API prefix with post-merge event checking", + "QueryLanguage": "JSONata", + "StartAt": "ExtractPrefix", + "States": { + "ExtractPrefix": { + "Type": "Pass", + "Output": { + "prefix": "{% $join($filter($split($states.input.detail.object.key, '/'), function($v, $i, $a) { $i < $count($a) - 1 }), '/') & '/' %}", + "bucket": "{% $states.input.detail.bucket.name %}", + "eventTime": "{% $states.input.time %}" + }, + "Next": "CheckExistingExecution" + }, + "CheckExistingExecution": { + "Type": "Task", + "Resource": "arn:aws:states:::dynamodb:getItem", + "Arguments": { + "TableName": "${DebounceTable}", + "Key": { + "prefix": { "S": "{% $states.input.prefix %}" } + } + }, + "Output": { + "prefix": "{% $states.input.prefix %}", + "bucket": "{% $states.input.bucket %}", + "eventTime": "{% $states.input.eventTime %}", + "exists": "{% $exists($states.result.Item) %}", + "existingExecutionId": "{% $states.result.Item.executionId.S %}" + }, + "Next": "BranchOnExisting" + }, + "BranchOnExisting": { + "Type": "Choice", + "Choices": [ + { + "Condition": "{% $states.input.exists = true %}", + "Next": "UpdateEventTimestamp" + } + ], + "Default": "CreateExecution" + }, + "UpdateEventTimestamp": { + "Type": "Task", + "Resource":
"arn:aws:states:::dynamodb:updateItem", + "Arguments": { + "TableName": "${DebounceTable}", + "Key": { + "prefix": { "S": "{% $states.input.prefix %}" } + }, + "UpdateExpression": "SET lastEventTime = :ts", + "ExpressionAttributeValues": { + ":ts": { "S": "{% $states.input.eventTime %}" } + } + }, + "Comment": "Update timestamp and exit - existing execution will handle", + "End": true + }, + "CreateExecution": { + "Type": "Task", + "Resource": "arn:aws:states:::dynamodb:putItem", + "Arguments": { + "TableName": "${DebounceTable}", + "Item": { + "prefix": { "S": "{% $states.input.prefix %}" }, + "executionId": { "S": "{% $states.context.Execution.Id %}" }, + "lastEventTime": { "S": "{% $states.input.eventTime %}" }, + "ttl": { "N": "{% $string($floor(($toMillis($now()) / 1000) + 300)) %}" } + } + }, + "Output": { + "prefix": "{% $states.input.prefix %}", + "bucket": "{% $states.input.bucket %}" + }, + "Next": "WaitForDebounce" + }, + "WaitForDebounce": { + "Type": "Wait", + "Seconds": "${DebounceSeconds}", + "Next": "CheckForNewerEvents" + }, + "CheckForNewerEvents": { + "Type": "Task", + "Resource": "arn:aws:states:::dynamodb:getItem", + "Arguments": { + "TableName": "${DebounceTable}", + "Key": { + "prefix": { "S": "{% $states.input.prefix %}" } + } + }, + "Output": { + "prefix": "{% $states.input.prefix %}", + "bucket": "{% $states.input.bucket %}", + "isOwner": "{% $states.result.Item.executionId.S = $states.context.Execution.Id %}", + "lastEventTime": "{% $states.result.Item.lastEventTime.S %}", + "mergeStartTime": "{% $states.result.Item.mergeStartTime.S %}" + }, + "Next": "BranchOnOwnership" + }, + "BranchOnOwnership": { + "Type": "Choice", + "Choices": [ + { + "Condition": "{% $states.input.isOwner = false %}", + "Next": "AbandonExecution" + } + ], + "Default": "RecordMergeStart" + }, + "AbandonExecution": { + "Type": "Succeed", + "Comment": "Another execution took over" + }, + "RecordMergeStart": { + "Type": "Task", + "Resource": 
"arn:aws:states:::dynamodb:updateItem", + "Arguments": { + "TableName": "${DebounceTable}", + "Key": { + "prefix": { "S": "{% $states.input.prefix %}" } + }, + "UpdateExpression": "SET mergeStartTime = :ts", + "ExpressionAttributeValues": { + ":ts": { "S": "{% $now() %}" } + } + }, + "Output": { + "prefix": "{% $states.input.prefix %}", + "bucket": "{% $states.input.bucket %}", + "mergeStartTime": "{% $now() %}" + }, + "Next": "InvokeMergeLambda" + }, + "InvokeMergeLambda": { + "Type": "Task", + "Resource": "arn:aws:states:::lambda:invoke", + "Arguments": { + "FunctionName": "${MergeFunctionArn}", + "Payload": { + "inputBucket": "{% $states.input.bucket %}", + "prefix": "{% $states.input.prefix %}", + "outputBucket": "${OutputBucket}" + } + }, + "Output": { + "prefix": "{% $states.input.prefix %}", + "bucket": "{% $states.input.bucket %}", + "mergeStartTime": "{% $states.input.mergeStartTime %}", + "mergeResult": "{% $states.result.Payload %}" + }, + "Next": "CheckPostMergeEvents", + "Catch": [ + { + "ErrorEquals": ["States.ALL"], + "Next": "CheckPostMergeEvents", + "Output": { + "prefix": "{% $states.input.prefix %}", + "bucket": "{% $states.input.bucket %}", + "mergeStartTime": "{% $states.input.mergeStartTime %}", + "mergeError": "{% $states.error %}" + } + } + ] + }, + "CheckPostMergeEvents": { + "Type": "Task", + "Resource": "arn:aws:states:::dynamodb:getItem", + "Arguments": { + "TableName": "${DebounceTable}", + "Key": { + "prefix": { "S": "{% $states.input.prefix %}" } + } + }, + "Output": { + "prefix": "{% $states.input.prefix %}", + "bucket": "{% $states.input.bucket %}", + "mergeStartTime": "{% $states.input.mergeStartTime %}", + "lastEventTime": "{% $states.result.Item.lastEventTime.S %}", + "hasNewerEvents": "{% $toMillis($states.result.Item.lastEventTime.S) > $toMillis($states.input.mergeStartTime) %}" + }, + "Next": "BranchOnPostMergeEvents" + }, + "BranchOnPostMergeEvents": { + "Type": "Choice", + "Choices": [ + { + "Condition": "{% 
$states.input.hasNewerEvents = true %}", + "Next": "WaitForDebounce", + "Comment": "Events arrived during merge - loop back" + } + ], + "Default": "CleanupExecution" + }, + "CleanupExecution": { + "Type": "Task", + "Resource": "arn:aws:states:::dynamodb:deleteItem", + "Arguments": { + "TableName": "${DebounceTable}", + "Key": { + "prefix": { "S": "{% $states.input.prefix %}" } + } + }, + "End": true + } + } +} diff --git a/Oproto.Lambda.OpenApi.Merge.Cdk/cloudformation/openapi-merge.yaml b/Oproto.Lambda.OpenApi.Merge.Cdk/cloudformation/openapi-merge.yaml new file mode 100644 index 0000000..fd2c31a --- /dev/null +++ b/Oproto.Lambda.OpenApi.Merge.Cdk/cloudformation/openapi-merge.yaml @@ -0,0 +1,635 @@ +AWSTemplateFormatVersion: '2010-09-09' +Description: | + OpenAPI Merge Lambda - Automatically merges OpenAPI specification files when changes are detected in S3. + This template creates a Lambda function, Step Functions state machine for debouncing, DynamoDB table for state tracking, + and EventBridge rules for S3 event notifications. 
+ +Metadata: + AWS::CloudFormation::Interface: + ParameterGroups: + - Label: + default: S3 Configuration + Parameters: + - InputBucketName + - OutputBucketName + - ApiPrefixes + - Label: + default: Lambda Configuration + Parameters: + - LambdaCodeS3Bucket + - LambdaCodeS3Key + - MemorySize + - TimeoutSeconds + - Label: + default: Debounce Configuration + Parameters: + - DebounceSeconds + - Label: + default: Monitoring Configuration + Parameters: + - EnableAlarms + - AlarmThreshold + - AlarmEvaluationPeriods + - AlarmSnsTopicArn + ParameterLabels: + InputBucketName: + default: Input S3 Bucket Name + OutputBucketName: + default: Output S3 Bucket Name (optional) + ApiPrefixes: + default: API Prefixes (comma-separated) + LambdaCodeS3Bucket: + default: Lambda Code S3 Bucket + LambdaCodeS3Key: + default: Lambda Code S3 Key + MemorySize: + default: Lambda Memory Size (MB) + TimeoutSeconds: + default: Lambda Timeout (seconds) + DebounceSeconds: + default: Debounce Wait Time (seconds) + EnableAlarms: + default: Enable CloudWatch Alarms + AlarmThreshold: + default: Alarm Failure Threshold + AlarmEvaluationPeriods: + default: Alarm Evaluation Periods + AlarmSnsTopicArn: + default: SNS Topic ARN for Alarms (optional) + +Parameters: + InputBucketName: + Type: String + Description: Name of the S3 bucket containing input files (config and source OpenAPI specs) + MinLength: 3 + MaxLength: 63 + + OutputBucketName: + Type: String + Description: Name of the S3 bucket for output files. Leave empty to use the same bucket as input. + Default: '' + + ApiPrefixes: + Type: CommaDelimitedList + Description: | + Comma-separated list of API prefixes to monitor (e.g., "publicapi/,internalapi/"). + Each prefix should end with a forward slash. 
+ Default: '' + + LambdaCodeS3Bucket: + Type: String + Description: S3 bucket containing the Lambda deployment package + MinLength: 3 + MaxLength: 63 + + LambdaCodeS3Key: + Type: String + Description: S3 key for the Lambda deployment package (ZIP file) + MinLength: 1 + + MemorySize: + Type: Number + Description: Lambda function memory size in MB + Default: 512 + MinValue: 128 + MaxValue: 10240 + + TimeoutSeconds: + Type: Number + Description: Lambda function timeout in seconds + Default: 60 + MinValue: 1 + MaxValue: 900 + + DebounceSeconds: + Type: Number + Description: Time to wait after the last S3 event before triggering a merge + Default: 5 + MinValue: 1 + MaxValue: 300 + + EnableAlarms: + Type: String + Description: Whether to create CloudWatch alarms for merge failures + Default: 'true' + AllowedValues: + - 'true' + - 'false' + + AlarmThreshold: + Type: Number + Description: Number of failures that triggers the alarm + Default: 1 + MinValue: 1 + + AlarmEvaluationPeriods: + Type: Number + Description: Number of periods to evaluate for the alarm + Default: 1 + MinValue: 1 + + AlarmSnsTopicArn: + Type: String + Description: Optional SNS topic ARN for alarm notifications + Default: '' + +Conditions: + UseInputBucketAsOutput: !Equals [!Ref OutputBucketName, ''] + CreateAlarms: !Equals [!Ref EnableAlarms, 'true'] + HasAlarmTopic: !Not [!Equals [!Ref AlarmSnsTopicArn, '']] + CreateAlarmWithTopic: !And [!Condition CreateAlarms, !Condition HasAlarmTopic] + + +Resources: + # DynamoDB Table for Debounce State + DebounceTable: + Type: AWS::DynamoDB::Table + Properties: + TableName: !Sub '${AWS::StackName}-debounce' + AttributeDefinitions: + - AttributeName: prefix + AttributeType: S + KeySchema: + - AttributeName: prefix + KeyType: HASH + BillingMode: PAY_PER_REQUEST + TimeToLiveSpecification: + AttributeName: ttl + Enabled: true + Tags: + - Key: Application + Value: OpenApiMerge + + # IAM Role for Lambda Function + MergeFunctionRole: + Type: AWS::IAM::Role + Properties: 
+ RoleName: !Sub '${AWS::StackName}-lambda-role' + AssumeRolePolicyDocument: + Version: '2012-10-17' + Statement: + - Effect: Allow + Principal: + Service: lambda.amazonaws.com + Action: sts:AssumeRole + ManagedPolicyArns: + - arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole + - arn:aws:iam::aws:policy/AWSXRayDaemonWriteAccess + Policies: + - PolicyName: S3Access + PolicyDocument: + Version: '2012-10-17' + Statement: + - Effect: Allow + Action: + - s3:GetObject + - s3:ListBucket + Resource: + - !Sub 'arn:aws:s3:::${InputBucketName}' + - !Sub 'arn:aws:s3:::${InputBucketName}/*' + - Effect: Allow + Action: + - s3:GetObject + - s3:PutObject + - s3:ListBucket + Resource: + - !If + - UseInputBucketAsOutput + - !Sub 'arn:aws:s3:::${InputBucketName}' + - !Sub 'arn:aws:s3:::${OutputBucketName}' + - !If + - UseInputBucketAsOutput + - !Sub 'arn:aws:s3:::${InputBucketName}/*' + - !Sub 'arn:aws:s3:::${OutputBucketName}/*' + - PolicyName: CloudWatchMetrics + PolicyDocument: + Version: '2012-10-17' + Statement: + - Effect: Allow + Action: + - cloudwatch:PutMetricData + Resource: '*' + + # Lambda Function + MergeFunction: + Type: AWS::Lambda::Function + Properties: + FunctionName: !Sub '${AWS::StackName}-merge' + Description: OpenAPI specification merge function + Runtime: dotnet8 + Handler: Oproto.Lambda.OpenApi.Merge.Lambda::Oproto.Lambda.OpenApi.Merge.Lambda.Functions.MergeFunction_HandleMerge_Generated::HandleMerge + Code: + S3Bucket: !Ref LambdaCodeS3Bucket + S3Key: !Ref LambdaCodeS3Key + MemorySize: !Ref MemorySize + Timeout: !Ref TimeoutSeconds + Role: !GetAtt MergeFunctionRole.Arn + Environment: + Variables: + OUTPUT_BUCKET: !If [UseInputBucketAsOutput, !Ref InputBucketName, !Ref OutputBucketName] + TracingConfig: + Mode: Active + Tags: + - Key: Application + Value: OpenApiMerge + + # IAM Role for Step Functions State Machine + StateMachineRole: + Type: AWS::IAM::Role + Properties: + RoleName: !Sub '${AWS::StackName}-sfn-role' + AssumeRolePolicyDocument: + 
Version: '2012-10-17' + Statement: + - Effect: Allow + Principal: + Service: states.amazonaws.com + Action: sts:AssumeRole + Policies: + - PolicyName: DynamoDBAccess + PolicyDocument: + Version: '2012-10-17' + Statement: + - Effect: Allow + Action: + - dynamodb:GetItem + - dynamodb:PutItem + - dynamodb:UpdateItem + - dynamodb:DeleteItem + Resource: !GetAtt DebounceTable.Arn + - PolicyName: LambdaInvoke + PolicyDocument: + Version: '2012-10-17' + Statement: + - Effect: Allow + Action: + - lambda:InvokeFunction + Resource: !GetAtt MergeFunction.Arn + - PolicyName: XRayAccess + PolicyDocument: + Version: '2012-10-17' + Statement: + - Effect: Allow + Action: + - xray:PutTraceSegments + - xray:PutTelemetryRecords + - xray:GetSamplingRules + - xray:GetSamplingTargets + Resource: '*' + - PolicyName: CloudWatchLogs + PolicyDocument: + Version: '2012-10-17' + Statement: + - Effect: Allow + Action: + - logs:CreateLogDelivery + - logs:GetLogDelivery + - logs:UpdateLogDelivery + - logs:DeleteLogDelivery + - logs:ListLogDeliveries + - logs:PutResourcePolicy + - logs:DescribeResourcePolicies + - logs:DescribeLogGroups + Resource: '*' + + + # Step Functions State Machine + DebounceStateMachine: + Type: AWS::StepFunctions::StateMachine + Properties: + StateMachineName: !Sub '${AWS::StackName}-debounce' + StateMachineType: STANDARD + RoleArn: !GetAtt StateMachineRole.Arn + TracingConfiguration: + Enabled: true + DefinitionSubstitutions: + DebounceTable: !Ref DebounceTable + MergeFunction: !GetAtt MergeFunction.Arn + DebounceSeconds: !Ref DebounceSeconds + OutputBucketResolved: !If [UseInputBucketAsOutput, !Ref InputBucketName, !Ref OutputBucketName] + DefinitionString: | + { + "Comment": "Debounce OpenAPI merge operations per API prefix with post-merge event checking", + "QueryLanguage": "JSONata", + "StartAt": "ExtractPrefix", + "States": { + "ExtractPrefix": { + "Type": "Pass", + "Output": { + "prefix": "{% $join($filter($split($states.input.detail.object.key, '/'), function($v, 
$i, $a) { $i < $count($a) - 1 }), '/') & '/' %}", + "bucket": "{% $states.input.detail.bucket.name %}", + "eventTime": "{% $states.input.time %}" + }, + "Next": "CheckExistingExecution" + }, + "CheckExistingExecution": { + "Type": "Task", + "Resource": "arn:aws:states:::dynamodb:getItem", + "Arguments": { + "TableName": "${DebounceTable}", + "Key": { + "prefix": { "S": "{% $states.input.prefix %}" } + } + }, + "Output": { + "prefix": "{% $states.input.prefix %}", + "bucket": "{% $states.input.bucket %}", + "eventTime": "{% $states.input.eventTime %}", + "exists": "{% $exists($states.result.Item) %}", + "existingExecutionId": "{% $states.result.Item.executionId.S %}" + }, + "Next": "BranchOnExisting" + }, + "BranchOnExisting": { + "Type": "Choice", + "Choices": [ + { + "Condition": "{% $states.input.exists = true %}", + "Next": "UpdateEventTimestamp" + } + ], + "Default": "CreateExecution" + }, + "UpdateEventTimestamp": { + "Type": "Task", + "Resource": "arn:aws:states:::dynamodb:updateItem", + "Arguments": { + "TableName": "${DebounceTable}", + "Key": { + "prefix": { "S": "{% $states.input.prefix %}" } + }, + "UpdateExpression": "SET lastEventTime = :ts", + "ExpressionAttributeValues": { + ":ts": { "S": "{% $states.input.eventTime %}" } + } + }, + "Comment": "Update timestamp and exit - existing execution will handle", + "End": true + }, + "CreateExecution": { + "Type": "Task", + "Resource": "arn:aws:states:::dynamodb:putItem", + "Arguments": { + "TableName": "${DebounceTable}", + "Item": { + "prefix": { "S": "{% $states.input.prefix %}" }, + "executionId": { "S": "{% $states.context.Execution.Id %}" }, + "lastEventTime": { "S": "{% $states.input.eventTime %}" }, + "ttl": { "N": "{% $string($floor(($toMillis($now()) / 1000) + 300)) %}" } + } + }, + "Output": { + "prefix": "{% $states.input.prefix %}", + "bucket": "{% $states.input.bucket %}" + }, + "Next": "WaitForDebounce" + }, + "WaitForDebounce": { + "Type": "Wait", + "Seconds": ${DebounceSeconds}, + "Next": 
"CheckForNewerEvents" + }, + "CheckForNewerEvents": { + "Type": "Task", + "Resource": "arn:aws:states:::dynamodb:getItem", + "Arguments": { + "TableName": "${DebounceTable}", + "Key": { + "prefix": { "S": "{% $states.input.prefix %}" } + } + }, + "Output": { + "prefix": "{% $states.input.prefix %}", + "bucket": "{% $states.input.bucket %}", + "isOwner": "{% $states.result.Item.executionId.S = $states.context.Execution.Id %}", + "lastEventTime": "{% $states.result.Item.lastEventTime.S %}", + "mergeStartTime": "{% $states.result.Item.mergeStartTime.S %}" + }, + "Next": "BranchOnOwnership" + }, + "BranchOnOwnership": { + "Type": "Choice", + "Choices": [ + { + "Condition": "{% $states.input.isOwner = false %}", + "Next": "AbandonExecution" + } + ], + "Default": "RecordMergeStart" + }, + "AbandonExecution": { + "Type": "Succeed", + "Comment": "Another execution took over" + }, + "RecordMergeStart": { + "Type": "Task", + "Resource": "arn:aws:states:::dynamodb:updateItem", + "Arguments": { + "TableName": "${DebounceTable}", + "Key": { + "prefix": { "S": "{% $states.input.prefix %}" } + }, + "UpdateExpression": "SET mergeStartTime = :ts", + "ExpressionAttributeValues": { + ":ts": { "S": "{% $now() %}" } + } + }, + "Output": { + "prefix": "{% $states.input.prefix %}", + "bucket": "{% $states.input.bucket %}", + "mergeStartTime": "{% $now() %}" + }, + "Next": "InvokeMergeLambda" + }, + "InvokeMergeLambda": { + "Type": "Task", + "Resource": "arn:aws:states:::lambda:invoke", + "Arguments": { + "FunctionName": "${MergeFunction}", + "Payload": { + "inputBucket": "{% $states.input.bucket %}", + "prefix": "{% $states.input.prefix %}", + "outputBucket": "${OutputBucketResolved}" + } + }, + "Output": { + "prefix": "{% $states.input.prefix %}", + "bucket": "{% $states.input.bucket %}", + "mergeStartTime": "{% $states.input.mergeStartTime %}", + "mergeResult": "{% $states.result.Payload %}" + }, + "Next": "CheckPostMergeEvents", + "Catch": [ + { + "ErrorEquals": ["States.ALL"], + 
"Next": "CheckPostMergeEvents", + "Output": { + "prefix": "{% $states.input.prefix %}", + "bucket": "{% $states.input.bucket %}", + "mergeStartTime": "{% $states.input.mergeStartTime %}", + "mergeError": "{% $states.error %}" + } + } + ] + }, + "CheckPostMergeEvents": { + "Type": "Task", + "Resource": "arn:aws:states:::dynamodb:getItem", + "Arguments": { + "TableName": "${DebounceTable}", + "Key": { + "prefix": { "S": "{% $states.input.prefix %}" } + } + }, + "Output": { + "prefix": "{% $states.input.prefix %}", + "bucket": "{% $states.input.bucket %}", + "mergeStartTime": "{% $states.input.mergeStartTime %}", + "lastEventTime": "{% $states.result.Item.lastEventTime.S %}", + "hasNewerEvents": "{% $toMillis($states.result.Item.lastEventTime.S) > $toMillis($states.input.mergeStartTime) %}" + }, + "Next": "BranchOnPostMergeEvents" + }, + "BranchOnPostMergeEvents": { + "Type": "Choice", + "Choices": [ + { + "Condition": "{% $states.input.hasNewerEvents = true %}", + "Next": "WaitForDebounce", + "Comment": "Events arrived during merge - loop back" + } + ], + "Default": "CleanupExecution" + }, + "CleanupExecution": { + "Type": "Task", + "Resource": "arn:aws:states:::dynamodb:deleteItem", + "Arguments": { + "TableName": "${DebounceTable}", + "Key": { + "prefix": { "S": "{% $states.input.prefix %}" } + } + }, + "End": true + } + } + } + Tags: + - Key: Application + Value: OpenApiMerge + + + # IAM Role for EventBridge to invoke Step Functions + EventBridgeRole: + Type: AWS::IAM::Role + Properties: + RoleName: !Sub '${AWS::StackName}-eventbridge-role' + AssumeRolePolicyDocument: + Version: '2012-10-17' + Statement: + - Effect: Allow + Principal: + Service: events.amazonaws.com + Action: sts:AssumeRole + Policies: + - PolicyName: InvokeStateMachine + PolicyDocument: + Version: '2012-10-17' + Statement: + - Effect: Allow + Action: + - states:StartExecution + Resource: !Ref DebounceStateMachine + + # EventBridge Rule for S3 Events (first prefix) + # Note: For multiple prefixes, 
you would need to create additional rules + # or use a Lambda function to filter events + S3EventRule: + Type: AWS::Events::Rule + Properties: + Name: !Sub '${AWS::StackName}-s3-events' + Description: Triggers OpenAPI merge on S3 object changes + State: ENABLED + EventPattern: + source: + - aws.s3 + detail-type: + - Object Created + - Object Deleted + detail: + bucket: + name: + - !Ref InputBucketName + Targets: + - Id: DebounceStateMachine + Arn: !Ref DebounceStateMachine + RoleArn: !GetAtt EventBridgeRole.Arn + + # CloudWatch Alarm for Merge Failures (conditional) + MergeFailureAlarm: + Type: AWS::CloudWatch::Alarm + Condition: CreateAlarms + Properties: + AlarmName: !Sub '${AWS::StackName}-merge-failures' + AlarmDescription: Alarm when OpenAPI merge operations fail + Namespace: OpenApiMerge + MetricName: MergeFailures + Statistic: Sum + Period: 300 + EvaluationPeriods: !Ref AlarmEvaluationPeriods + Threshold: !Ref AlarmThreshold + ComparisonOperator: GreaterThanOrEqualToThreshold + TreatMissingData: notBreaching + AlarmActions: !If + - HasAlarmTopic + - - !Ref AlarmSnsTopicArn + - !Ref AWS::NoValue + +Outputs: + MergeFunctionArn: + Description: ARN of the OpenAPI merge Lambda function + Value: !GetAtt MergeFunction.Arn + Export: + Name: !Sub '${AWS::StackName}-MergeFunctionArn' + + MergeFunctionName: + Description: Name of the OpenAPI merge Lambda function + Value: !Ref MergeFunction + Export: + Name: !Sub '${AWS::StackName}-MergeFunctionName' + + StateMachineArn: + Description: ARN of the debounce Step Functions state machine + Value: !Ref DebounceStateMachine + Export: + Name: !Sub '${AWS::StackName}-StateMachineArn' + + StateMachineName: + Description: Name of the debounce Step Functions state machine + Value: !GetAtt DebounceStateMachine.Name + Export: + Name: !Sub '${AWS::StackName}-StateMachineName' + + DebounceTableName: + Description: Name of the DynamoDB debounce table + Value: !Ref DebounceTable + Export: + Name: !Sub 
+  '${AWS::StackName}-DebounceTableName'
+
+  DebounceTableArn:
+    Description: ARN of the DynamoDB debounce table
+    Value: !GetAtt DebounceTable.Arn
+    Export:
+      Name: !Sub '${AWS::StackName}-DebounceTableArn'
+
+  EventBridgeRuleArn:
+    Description: ARN of the EventBridge rule for S3 events
+    Value: !GetAtt S3EventRule.Arn
+    Export:
+      Name: !Sub '${AWS::StackName}-EventBridgeRuleArn'
+
+  OutputBucketName:
+    Description: Name of the output S3 bucket
+    Value: !If [UseInputBucketAsOutput, !Ref InputBucketName, !Ref OutputBucketName]
+    Export:
+      Name: !Sub '${AWS::StackName}-OutputBucketName'
diff --git a/Oproto.Lambda.OpenApi.Merge.Lambda.Tests/AutoDiscoveryPropertyTests.cs b/Oproto.Lambda.OpenApi.Merge.Lambda.Tests/AutoDiscoveryPropertyTests.cs
new file mode 100644
index 0000000..30acb30
--- /dev/null
+++ b/Oproto.Lambda.OpenApi.Merge.Lambda.Tests/AutoDiscoveryPropertyTests.cs
@@ -0,0 +1,350 @@
+using FsCheck;
+using FsCheck.Xunit;
+using Microsoft.Extensions.Logging;
+using Moq;
+using Oproto.Lambda.OpenApi.Merge.Lambda.Models;
+using Oproto.Lambda.OpenApi.Merge.Lambda.Services;
+
+namespace Oproto.Lambda.OpenApi.Merge.Lambda.Tests;
+
+/// <summary>
+/// Property-based tests for auto-discovery filtering.
+/// Feature: lambda-merge-tool, Property 3: Auto-Discovery Filtering
+/// **Validates: Requirements 3.2**
+/// </summary>
+public class AutoDiscoveryPropertyTests
+{
+    /// <summary>
+    /// Generators for auto-discovery test data.
+    /// </summary>
+    private static class AutoDiscoveryGenerators
+    {
+        /// <summary>
+        /// Generates valid prefix values.
+        /// </summary>
+        public static Gen<string> PrefixGen()
+        {
+            return Gen.Elements(
+                "publicapi/", "internalapi/", "api/v1/", "services/",
+                "gateway/", "admin/", "");
+        }
+
+        /// <summary>
+        /// Generates valid JSON filenames.
+        /// </summary>
+        public static Gen<string> JsonFilenameGen()
+        {
+            return Gen.Elements(
+                "users.json", "orders.json", "products.json", "service1.json",
+                "api.json", "openapi.json", "spec.json", "data.json",
+                "users-service.json", "orders-api.json", "draft.json");
+        }
+
+        /// <summary>
+        /// Generates non-JSON filenames.
+        /// </summary>
+        public static Gen<string> NonJsonFilenameGen()
+        {
+            return Gen.Elements(
+                "readme.md", "config.yaml", "data.xml", "script.js",
+                "styles.css", "image.png", "document.txt");
+        }
+
+        /// <summary>
+        /// Generates output filenames.
+        /// </summary>
+        public static Gen<string> OutputFilenameGen()
+        {
+            return Gen.Elements(
+                "merged.json", "output.json", "merged-openapi.json", "api-merged.json");
+        }
+
+        /// <summary>
+        /// Generates exclude patterns.
+        /// </summary>
+        public static Gen<string> ExcludePatternGen()
+        {
+            return Gen.Elements(
+                "*-draft.json", "*.backup.json", "test-*.json", "*-old.json",
+                "temp*.json", "*-dev.json");
+        }
+
+        /// <summary>
+        /// Generates a set of S3 keys for testing auto-discovery.
+        /// </summary>
+        public static Gen<AutoDiscoveryTestCase> TestCaseGen()
+        {
+            return from prefix in PrefixGen()
+                   from jsonFileCount in Gen.Choose(1, 5)
+                   from jsonFiles in Gen.ListOf(jsonFileCount, JsonFilenameGen())
+                   from nonJsonFileCount in Gen.Choose(0, 3)
+                   from nonJsonFiles in Gen.ListOf(nonJsonFileCount, NonJsonFilenameGen())
+                   from outputFile in OutputFilenameGen()
+                   from excludePatternCount in Gen.Choose(0, 2)
+                   from excludePatterns in Gen.ListOf(excludePatternCount, ExcludePatternGen())
+                   let allKeys = BuildAllKeys(prefix, jsonFiles.Distinct().ToList(), nonJsonFiles.Distinct().ToList())
+                   select new AutoDiscoveryTestCase(
+                       prefix,
+                       allKeys,
+                       outputFile,
+                       excludePatterns.Distinct().ToList());
+        }
+
+        private static List<string> BuildAllKeys(string prefix, List<string> jsonFiles, List<string> nonJsonFiles)
+        {
+            var keys = new List<string>();
+
+            // Always add config.json
+            keys.Add(prefix + "config.json");
+
+            // Add JSON files
+            foreach (var file in jsonFiles)
+            {
+                keys.Add(prefix + file);
+            }
+
+            // Add non-JSON files
+            foreach (var file in nonJsonFiles)
+            {
+                keys.Add(prefix + file);
+            }
+
+            return keys;
+        }
+    }
+
+    /// <summary>
+    /// Test case for auto-discovery testing.
+    /// </summary>
+    public record AutoDiscoveryTestCase(
+        string Prefix,
+        List<string> AllKeys,
+        string OutputFile,
+        List<string> ExcludePatterns);
+
+    private readonly Mock<IS3Service> _mockS3Service;
+    private readonly Mock<ILogger<SourceDiscovery>> _mockLogger;
+    private readonly SourceDiscovery _sourceDiscovery;
+
+    public AutoDiscoveryPropertyTests()
+    {
+        _mockS3Service = new Mock<IS3Service>();
+        _mockLogger = new Mock<ILogger<SourceDiscovery>>();
+        _sourceDiscovery = new SourceDiscovery(_mockS3Service.Object, _mockLogger.Object);
+    }
+
+    /// <summary>
+    /// Feature: lambda-merge-tool, Property 3: Auto-Discovery Filtering
+    /// For any set of S3 keys within a prefix, when autoDiscover is true,
+    /// the discovered sources SHALL include only .json files.
+    /// **Validates: Requirements 3.2**
+    /// </summary>
+    [Property(MaxTest = 100)]
+    public Property AutoDiscover_OnlyIncludesJsonFiles()
+    {
+        return Prop.ForAll(
+            AutoDiscoveryGenerators.TestCaseGen().ToArbitrary(),
+            testCase =>
+            {
+                // Arrange
+                _mockS3Service
+                    .Setup(x => x.ListObjectsAsync(It.IsAny<string>(), testCase.Prefix, It.IsAny<CancellationToken>()))
+                    .ReturnsAsync(testCase.AllKeys);
+
+                var config = new LambdaMergeConfig
+                {
+                    AutoDiscover = true,
+                    Output = testCase.OutputFile,
+                    ExcludePatterns = testCase.ExcludePatterns,
+                    Info = new Oproto.Lambda.OpenApi.Merge.MergeInfoConfiguration
+                    {
+                        Title = "Test",
+                        Version = "1.0"
+                    }
+                };
+
+                // Act
+                var result = _sourceDiscovery.DiscoverSourcesAsync("test-bucket", testCase.Prefix, config)
+                    .GetAwaiter().GetResult();
+
+                // Assert - all discovered sources should be JSON files
+                var allJson = result.All(s => s.Key.EndsWith(".json", StringComparison.OrdinalIgnoreCase));
+
+                return allJson.Label($"All discovered sources should be JSON files. Found: {string.Join(", ", result.Select(s => s.Key))}");
+            });
+    }
+
+    /// <summary>
+    /// Feature: lambda-merge-tool, Property 3: Auto-Discovery Filtering
+    /// For any set of S3 keys within a prefix, when autoDiscover is true,
+    /// the discovered sources SHALL exclude config.json.
+    /// **Validates: Requirements 3.2**
+    /// </summary>
+    [Property(MaxTest = 100)]
+    public Property AutoDiscover_ExcludesConfigJson()
+    {
+        return Prop.ForAll(
+            AutoDiscoveryGenerators.TestCaseGen().ToArbitrary(),
+            testCase =>
+            {
+                // Arrange
+                _mockS3Service
+                    .Setup(x => x.ListObjectsAsync(It.IsAny<string>(), testCase.Prefix, It.IsAny<CancellationToken>()))
+                    .ReturnsAsync(testCase.AllKeys);
+
+                var config = new LambdaMergeConfig
+                {
+                    AutoDiscover = true,
+                    Output = testCase.OutputFile,
+                    ExcludePatterns = testCase.ExcludePatterns,
+                    Info = new Oproto.Lambda.OpenApi.Merge.MergeInfoConfiguration
+                    {
+                        Title = "Test",
+                        Version = "1.0"
+                    }
+                };
+
+                // Act
+                var result = _sourceDiscovery.DiscoverSourcesAsync("test-bucket", testCase.Prefix, config)
+                    .GetAwaiter().GetResult();
+
+                // Assert - config.json should not be in results
+                var configExcluded = !result.Any(s =>
+                    s.Key.EndsWith("/config.json", StringComparison.OrdinalIgnoreCase) ||
+                    s.Key.Equals("config.json", StringComparison.OrdinalIgnoreCase));
+
+                return configExcluded.Label($"config.json should be excluded. Found keys: {string.Join(", ", result.Select(s => s.Key))}");
+            });
+    }
+
+    /// <summary>
+    /// Feature: lambda-merge-tool, Property 3: Auto-Discovery Filtering
+    /// For any set of S3 keys within a prefix, when autoDiscover is true,
+    /// the discovered sources SHALL exclude the configured output file.
+    /// **Validates: Requirements 3.2**
+    /// </summary>
+    [Property(MaxTest = 100)]
+    public Property AutoDiscover_ExcludesOutputFile()
+    {
+        return Prop.ForAll(
+            AutoDiscoveryGenerators.TestCaseGen().ToArbitrary(),
+            testCase =>
+            {
+                // Arrange - add the output file to the list of keys
+                var keysWithOutput = new List<string>(testCase.AllKeys);
+                var outputKey = string.IsNullOrEmpty(testCase.Prefix)
+                    ? testCase.OutputFile
+                    : testCase.Prefix + testCase.OutputFile;
+                if (!keysWithOutput.Contains(outputKey))
+                {
+                    keysWithOutput.Add(outputKey);
+                }
+
+                _mockS3Service
+                    .Setup(x => x.ListObjectsAsync(It.IsAny<string>(), testCase.Prefix, It.IsAny<CancellationToken>()))
+                    .ReturnsAsync(keysWithOutput);
+
+                var config = new LambdaMergeConfig
+                {
+                    AutoDiscover = true,
+                    Output = testCase.OutputFile,
+                    ExcludePatterns = testCase.ExcludePatterns,
+                    Info = new Oproto.Lambda.OpenApi.Merge.MergeInfoConfiguration
+                    {
+                        Title = "Test",
+                        Version = "1.0"
+                    }
+                };
+
+                // Act
+                var result = _sourceDiscovery.DiscoverSourcesAsync("test-bucket", testCase.Prefix, config)
+                    .GetAwaiter().GetResult();
+
+                // Assert - output file should not be in results
+                var outputExcluded = !result.Any(s =>
+                    s.Key.Equals(outputKey, StringComparison.OrdinalIgnoreCase));
+
+                return outputExcluded.Label($"Output file '{outputKey}' should be excluded. Found keys: {string.Join(", ", result.Select(s => s.Key))}");
+            });
+    }
+
+    /// <summary>
+    /// Feature: lambda-merge-tool, Property 3: Auto-Discovery Filtering
+    /// For any set of S3 keys within a prefix, when autoDiscover is true,
+    /// the discovered sources SHALL exclude files matching excludePatterns.
+    /// **Validates: Requirements 3.2**
+    /// </summary>
+    [Property(MaxTest = 100)]
+    public Property AutoDiscover_ExcludesFilesMatchingExcludePatterns()
+    {
+        return Prop.ForAll(
+            AutoDiscoveryGenerators.TestCaseGen().ToArbitrary(),
+            testCase =>
+            {
+                // Arrange
+                _mockS3Service
+                    .Setup(x => x.ListObjectsAsync(It.IsAny<string>(), testCase.Prefix, It.IsAny<CancellationToken>()))
+                    .ReturnsAsync(testCase.AllKeys);
+
+                var config = new LambdaMergeConfig
+                {
+                    AutoDiscover = true,
+                    Output = testCase.OutputFile,
+                    ExcludePatterns = testCase.ExcludePatterns,
+                    Info = new Oproto.Lambda.OpenApi.Merge.MergeInfoConfiguration
+                    {
+                        Title = "Test",
+                        Version = "1.0"
+                    }
+                };
+
+                // Act
+                var result = _sourceDiscovery.DiscoverSourcesAsync("test-bucket", testCase.Prefix, config)
+                    .GetAwaiter().GetResult();
+
+                // Assert - no discovered source should match any exclude pattern
+                var noExcludedFiles = true;
+                foreach (var source in result)
+                {
+                    var filename = GetFilename(source.Key);
+                    foreach (var pattern in testCase.ExcludePatterns)
+                    {
+                        if (MatchesGlobPattern(filename, pattern))
+                        {
+                            noExcludedFiles = false;
+                            break;
+                        }
+                    }
+                    if (!noExcludedFiles) break;
+                }
+
+                return noExcludedFiles.Label($"No discovered source should match exclude patterns. Patterns: [{string.Join(", ", testCase.ExcludePatterns)}], Found: [{string.Join(", ", result.Select(s => s.Key))}]");
+            });
+    }
+
+    private static string GetFilename(string key)
+    {
+        var lastSlash = key.LastIndexOf('/');
+        return lastSlash >= 0 ? key.Substring(lastSlash + 1) : key;
+    }
+
+    private static bool MatchesGlobPattern(string filename, string pattern)
+    {
+        if (string.IsNullOrEmpty(pattern))
+        {
+            return false;
+        }
+
+        var regexPattern = "^" + System.Text.RegularExpressions.Regex.Escape(pattern)
+            .Replace("\\*", ".*")
+            .Replace("\\?", ".") + "$";
+
+        return System.Text.RegularExpressions.Regex.IsMatch(
+            filename,
+            regexPattern,
+            System.Text.RegularExpressions.RegexOptions.IgnoreCase);
+    }
+}
diff --git a/Oproto.Lambda.OpenApi.Merge.Lambda.Tests/ConditionalWritePropertyTests.cs b/Oproto.Lambda.OpenApi.Merge.Lambda.Tests/ConditionalWritePropertyTests.cs
new file mode 100644
index 0000000..63557d6
--- /dev/null
+++ b/Oproto.Lambda.OpenApi.Merge.Lambda.Tests/ConditionalWritePropertyTests.cs
@@ -0,0 +1,329 @@
+using FsCheck;
+using FsCheck.Xunit;
+using Microsoft.Extensions.Logging;
+using Moq;
+using Oproto.Lambda.OpenApi.Merge.Lambda.Services;
+using System.Text.Json;
+using System.Text.Json.Nodes;
+
+namespace Oproto.Lambda.OpenApi.Merge.Lambda.Tests;
+
+/// <summary>
+/// Property-based tests for conditional write correctness.
+/// Feature: lambda-merge-tool, Property 6: Conditional Write Correctness
+/// **Validates: Requirements 5.3, 5.4**
+/// </summary>
+public class ConditionalWritePropertyTests
+{
+    /// <summary>
+    /// Generators for conditional write test data.
+    /// </summary>
+    private static class ConditionalWriteGenerators
+    {
+        /// <summary>
+        /// Generates valid OpenAPI-like JSON objects.
+        /// </summary>
+        public static Gen<JsonObject> OpenApiJsonGen()
+        {
+            return from title in Gen.Elements("Test API", "My API", "Sample API", "Product API")
+                   from version in Gen.Elements("1.0.0", "2.0.0", "1.1.0", "3.0.0")
+                   from pathCount in Gen.Choose(1, 3)
+                   from paths in Gen.ListOf(pathCount, PathGen())
+                   select BuildOpenApiJson(title, version, paths.ToList());
+        }
+
+        private static Gen<(string Path, string Method, string Summary)> PathGen()
+        {
+            return from path in Gen.Elements("/users", "/products", "/orders", "/items", "/api/v1/data")
+                   from method in Gen.Elements("get", "post", "put", "delete")
+                   from summary in Gen.Elements("Get resource", "Create resource", "Update resource", "Delete resource")
+                   select (path, method, summary);
+        }
+
+        private static JsonObject BuildOpenApiJson(
+            string title,
+            string version,
+            List<(string Path, string Method, string Summary)> paths)
+        {
+            var doc = new JsonObject
+            {
+                ["openapi"] = "3.0.0",
+                ["info"] = new JsonObject
+                {
+                    ["title"] = title,
+                    ["version"] = version
+                }
+            };
+
+            var pathsObj = new JsonObject();
+            foreach (var (path, method, summary) in paths)
+            {
+                if (!pathsObj.ContainsKey(path))
+                {
+                    pathsObj[path] = new JsonObject();
+                }
+                var pathItem = pathsObj[path]!.AsObject();
+                pathItem[method] = new JsonObject
+                {
+                    ["summary"] = summary,
+                    ["responses"] = new JsonObject
+                    {
+                        ["200"] = new JsonObject
+                        {
+                            ["description"] = "Success"
+                        }
+                    }
+                };
+            }
+            doc["paths"] = pathsObj;
+
+            return doc;
+        }
+
+        /// <summary>
+        /// Generates a test case for conditional write testing.
+        /// </summary>
+        public static Gen<ConditionalWriteTestCase> IdenticalContentTestCaseGen()
+        {
+            return from jsonObj in OpenApiJsonGen()
+                   from bucket in Gen.Elements("test-bucket", "my-bucket", "api-bucket")
+                   from key in Gen.Elements("api/merged.json", "output/openapi.json", "specs/api.json")
+                   let originalJson = jsonObj.ToJsonString(new JsonSerializerOptions { WriteIndented = true })
+                   // Create a formatting variation that is semantically identical
+                   let variedJson = CreateFormattingVariation(jsonObj)
+                   select new ConditionalWriteTestCase(bucket, key, originalJson, variedJson, true);
+        }
+
+        /// <summary>
+        /// Generates a test case where content differs.
+        /// </summary>
+        public static Gen<ConditionalWriteTestCase> DifferentContentTestCaseGen()
+        {
+            return from jsonObj1 in OpenApiJsonGen()
+                   from jsonObj2 in OpenApiJsonGen()
+                   from bucket in Gen.Elements("test-bucket", "my-bucket", "api-bucket")
+                   from key in Gen.Elements("api/merged.json", "output/openapi.json", "specs/api.json")
+                   let json1 = jsonObj1.ToJsonString(new JsonSerializerOptions { WriteIndented = true })
+                   let json2 = jsonObj2.ToJsonString(new JsonSerializerOptions { WriteIndented = true })
+                   // Only use if they're actually different
+                   where json1 != json2
+                   select new ConditionalWriteTestCase(bucket, key, json1, json2, false);
+        }
+
+        /// <summary>
+        /// Generates a test case where no existing content exists.
+        /// </summary>
+        public static Gen<ConditionalWriteTestCase> NoExistingContentTestCaseGen()
+        {
+            return from jsonObj in OpenApiJsonGen()
+                   from bucket in Gen.Elements("test-bucket", "my-bucket", "api-bucket")
+                   from key in Gen.Elements("api/merged.json", "output/openapi.json", "specs/api.json")
+                   let newJson = jsonObj.ToJsonString(new JsonSerializerOptions { WriteIndented = true })
+                   select new ConditionalWriteTestCase(bucket, key, null, newJson, false);
+        }
+
+        private static string CreateFormattingVariation(JsonObject original)
+        {
+            // Reverse property order to create a formatting variation
+            var reversed = new JsonObject();
+            var properties = original.ToList();
+            properties.Reverse();
+            foreach (var kvp in properties)
+            {
+                reversed[kvp.Key] = JsonNode.Parse(kvp.Value?.ToJsonString() ?? "null");
+            }
+            return reversed.ToJsonString(new JsonSerializerOptions { WriteIndented = false });
+        }
+    }
+
+    /// <summary>
+    /// Test case for conditional write testing.
+    /// </summary>
+    public record ConditionalWriteTestCase(
+        string Bucket,
+        string Key,
+        string? ExistingContent,
+        string NewContent,
+        bool ShouldBeIdentical);
+
+    private readonly Mock<IS3Service> _mockS3Service;
+    private readonly Mock<ILogger<OutputComparer>> _mockOutputComparerLogger;
+    private readonly Mock<ILogger<ConditionalWriter>> _mockConditionalWriterLogger;
+    private readonly OutputComparer _outputComparer;
+    private readonly ConditionalWriter _conditionalWriter;
+
+    public ConditionalWritePropertyTests()
+    {
+        _mockS3Service = new Mock<IS3Service>();
+        _mockOutputComparerLogger = new Mock<ILogger<OutputComparer>>();
+        _mockConditionalWriterLogger = new Mock<ILogger<ConditionalWriter>>();
+        _outputComparer = new OutputComparer(_mockOutputComparerLogger.Object);
+        _conditionalWriter = new ConditionalWriter(
+            _mockS3Service.Object,
+            _outputComparer,
+            _mockConditionalWriterLogger.Object);
+    }
+
+    /// <summary>
+    /// Feature: lambda-merge-tool, Property 6: Conditional Write Correctness
+    /// For any merge operation, if the new merged spec is semantically identical to the
+    /// existing output spec, the write operation SHALL be skipped.
+    /// **Validates: Requirements 5.3, 5.4**
+    /// </summary>
+    [Property(MaxTest = 100)]
+    public Property IdenticalContent_SkipsWrite()
+    {
+        return Prop.ForAll(
+            ConditionalWriteGenerators.IdenticalContentTestCaseGen().ToArbitrary(),
+            testCase =>
+            {
+                // Arrange
+                _mockS3Service.Reset();
+                _mockS3Service
+                    .Setup(x => x.ReadTextAsync(testCase.Bucket, testCase.Key, It.IsAny<CancellationToken>()))
+                    .ReturnsAsync(testCase.ExistingContent);
+
+                // Act
+                var result = _conditionalWriter.WriteIfChangedAsync(
+                    testCase.Bucket,
+                    testCase.Key,
+                    testCase.NewContent)
+                    .GetAwaiter().GetResult();
+
+                // Assert - write should be skipped for identical content
+                var writeSkipped = !result.WasWritten;
+                var reasonCorrect = result.Reason == "Content unchanged";
+
+                // Verify WriteTextAsync was NOT called
+                _mockS3Service.Verify(
+                    x => x.WriteTextAsync(It.IsAny<string>(), It.IsAny<string>(), It.IsAny<string>(), It.IsAny<string>(), It.IsAny<CancellationToken>()),
+                    Times.Never);
+
+                return (writeSkipped && reasonCorrect).Label(
+                    $"Write should be skipped for identical content.\n" +
+                    $"WasWritten: {result.WasWritten}, Reason: {result.Reason}");
+            });
+    }
+
+    /// <summary>
+    /// Feature: lambda-merge-tool, Property 6: Conditional Write Correctness
+    /// For any merge operation, if the new merged spec differs from the existing output spec,
+    /// the write operation SHALL occur.
+    /// **Validates: Requirements 5.3, 5.4**
+    /// </summary>
+    [Property(MaxTest = 100)]
+    public Property DifferentContent_PerformsWrite()
+    {
+        return Prop.ForAll(
+            ConditionalWriteGenerators.DifferentContentTestCaseGen().ToArbitrary(),
+            testCase =>
+            {
+                // Arrange
+                _mockS3Service.Reset();
+                _mockS3Service
+                    .Setup(x => x.ReadTextAsync(testCase.Bucket, testCase.Key, It.IsAny<CancellationToken>()))
+                    .ReturnsAsync(testCase.ExistingContent);
+                _mockS3Service
+                    .Setup(x => x.WriteTextAsync(testCase.Bucket, testCase.Key, testCase.NewContent, "application/json", It.IsAny<CancellationToken>()))
+                    .Returns(Task.CompletedTask);
+
+                // Act
+                var result = _conditionalWriter.WriteIfChangedAsync(
+                    testCase.Bucket,
+                    testCase.Key,
+                    testCase.NewContent)
+                    .GetAwaiter().GetResult();
+
+                // Assert - write should occur for different content
+                var writeOccurred = result.WasWritten;
+                var reasonCorrect = result.Reason == "Content changed";
+
+                // Verify WriteTextAsync WAS called
+                _mockS3Service.Verify(
+                    x => x.WriteTextAsync(testCase.Bucket, testCase.Key, testCase.NewContent, "application/json", It.IsAny<CancellationToken>()),
+                    Times.Once);
+
+                return (writeOccurred && reasonCorrect).Label(
+                    $"Write should occur for different content.\n" +
+                    $"WasWritten: {result.WasWritten}, Reason: {result.Reason}");
+            });
+    }
+
+    /// <summary>
+    /// Feature: lambda-merge-tool, Property 6: Conditional Write Correctness
+    /// For any merge operation where no existing output exists, the write operation SHALL occur.
+    /// **Validates: Requirements 5.3, 5.4**
+    /// </summary>
+    [Property(MaxTest = 100)]
+    public Property NoExistingContent_PerformsWrite()
+    {
+        return Prop.ForAll(
+            ConditionalWriteGenerators.NoExistingContentTestCaseGen().ToArbitrary(),
+            testCase =>
+            {
+                // Arrange
+                _mockS3Service.Reset();
+                _mockS3Service
+                    .Setup(x => x.ReadTextAsync(testCase.Bucket, testCase.Key, It.IsAny<CancellationToken>()))
+                    .ReturnsAsync((string?)null);
+                _mockS3Service
+                    .Setup(x => x.WriteTextAsync(testCase.Bucket, testCase.Key, testCase.NewContent, "application/json", It.IsAny<CancellationToken>()))
+                    .Returns(Task.CompletedTask);
+
+                // Act
+                var result = _conditionalWriter.WriteIfChangedAsync(
+                    testCase.Bucket,
+                    testCase.Key,
+                    testCase.NewContent)
+                    .GetAwaiter().GetResult();
+
+                // Assert - write should occur when no existing content
+                var writeOccurred = result.WasWritten;
+                var reasonCorrect = result.Reason == "File did not exist";
+
+                // Verify WriteTextAsync WAS called
+                _mockS3Service.Verify(
+                    x => x.WriteTextAsync(testCase.Bucket, testCase.Key, testCase.NewContent, "application/json", It.IsAny<CancellationToken>()),
+                    Times.Once);
+
+                return (writeOccurred && reasonCorrect).Label(
+                    $"Write should occur when no existing content.\n" +
+                    $"WasWritten: {result.WasWritten}, Reason: {result.Reason}");
+            });
+    }
+
+    /// <summary>
+    /// Feature: lambda-merge-tool, Property 6: Conditional Write Correctness
+    /// For any conditional write operation, the output key in the result SHALL match the input key.
+    /// **Validates: Requirements 5.3, 5.4**
+    /// </summary>
+    [Property(MaxTest = 100)]
+    public Property OutputKey_MatchesInputKey()
+    {
+        return Prop.ForAll(
+            ConditionalWriteGenerators.IdenticalContentTestCaseGen().ToArbitrary(),
+            testCase =>
+            {
+                // Arrange
+                _mockS3Service.Reset();
+                _mockS3Service
+                    .Setup(x => x.ReadTextAsync(testCase.Bucket, testCase.Key, It.IsAny<CancellationToken>()))
+                    .ReturnsAsync(testCase.ExistingContent);
+
+                // Act
+                var result = _conditionalWriter.WriteIfChangedAsync(
+                        testCase.Bucket,
+                        testCase.Key,
+                        testCase.NewContent)
+                    .GetAwaiter().GetResult();
+
+                // Assert - output key should match input key
+                var keyMatches = result.OutputKey == testCase.Key;
+
+                return keyMatches.Label(
+                    $"Output key should match input key.\n" +
+                    $"Expected: {testCase.Key}, Actual: {result.OutputKey}");
+            });
+    }
+}
diff --git a/Oproto.Lambda.OpenApi.Merge.Lambda.Tests/ConfigLoaderTests.cs b/Oproto.Lambda.OpenApi.Merge.Lambda.Tests/ConfigLoaderTests.cs
new file mode 100644
index 0000000..d293630
--- /dev/null
+++ b/Oproto.Lambda.OpenApi.Merge.Lambda.Tests/ConfigLoaderTests.cs
@@ -0,0 +1,284 @@
+using Microsoft.Extensions.Logging;
+using Moq;
+using Oproto.Lambda.OpenApi.Merge.Lambda.Models;
+using Oproto.Lambda.OpenApi.Merge.Lambda.Services;
+
+namespace Oproto.Lambda.OpenApi.Merge.Lambda.Tests;
+
+/// <summary>
+/// Unit tests for ConfigLoader.
+/// </summary>
+public class ConfigLoaderTests
+{
+    private readonly Mock<IS3Service> _mockS3Service;
+    private readonly Mock<ILogger<ConfigLoader>> _mockLogger;
+    private readonly ConfigLoader _configLoader;
+
+    public ConfigLoaderTests()
+    {
+        _mockS3Service = new Mock<IS3Service>();
+        _mockLogger = new Mock<ILogger<ConfigLoader>>();
+        _configLoader = new ConfigLoader(_mockS3Service.Object, _mockLogger.Object);
+    }
+
+    #region LoadConfigAsync - Valid Config Tests
+
+    [Fact]
+    public async Task LoadConfigAsync_ValidConfig_ReturnsConfig()
+    {
+        // Arrange
+        var validJson = """
+            {
+              "info": {
+                "title": "Test API",
+                "version": "1.0.0",
+                "description": "A test API"
+              },
+              "autoDiscover": true,
+              "output": "merged.json"
+            }
+            """;
+
+        _mockS3Service
+            .Setup(x => x.ReadTextAsync("test-bucket", "publicapi/config.json", It.IsAny<CancellationToken>()))
+            .ReturnsAsync(validJson);
+
+        // Act
+        var config = await _configLoader.LoadConfigAsync("test-bucket", "publicapi/");
+
+        // Assert
+        Assert.NotNull(config);
+        Assert.Equal("Test API", config.Info.Title);
+        Assert.Equal("1.0.0", config.Info.Version);
+        Assert.Equal("A test API", config.Info.Description);
+        Assert.True(config.AutoDiscover);
+        Assert.Equal("merged.json", config.Output);
+    }
+
+    [Fact]
+    public async Task LoadConfigAsync_ValidConfigWithExplicitSources_ReturnsConfig()
+    {
+        // Arrange
+        var validJson = """
+            {
+              "info": {
+                "title": "Test API",
+                "version": "1.0.0"
+              },
+              "autoDiscover": false,
+              "sources": [
+                {
+                  "path": "users.json",
+                  "name": "Users"
+                }
+              ],
+              "output": "merged.json"
+            }
+            """;
+
+        _mockS3Service
+            .Setup(x => x.ReadTextAsync("test-bucket", "api/config.json", It.IsAny<CancellationToken>()))
+            .ReturnsAsync(validJson);
+
+        // Act
+        var config = await _configLoader.LoadConfigAsync("test-bucket", "api/");
+
+        // Assert
+        Assert.NotNull(config);
+        Assert.False(config.AutoDiscover);
+        Assert.Single(config.Sources);
+        Assert.Equal("users.json", config.Sources[0].Path);
+    }
+
+    [Fact]
+    public async Task LoadConfigAsync_PrefixWithoutTrailingSlash_AddsSlash()
+    {
+        // Arrange
+        var validJson = """
+            {
+              "info": {
+                "title": "Test API",
+                "version": "1.0.0"
+              },
+              "autoDiscover": true,
+              "output": "merged.json"
+            }
+            """;
+
+        _mockS3Service
+            .Setup(x => x.ReadTextAsync("test-bucket", "publicapi/config.json", It.IsAny<CancellationToken>()))
+            .ReturnsAsync(validJson);
+
+        // Act - prefix without trailing slash
+        var config = await _configLoader.LoadConfigAsync("test-bucket", "publicapi");
+
+        // Assert
+        Assert.NotNull(config);
+        _mockS3Service.Verify(
+            x => x.ReadTextAsync("test-bucket", "publicapi/config.json", It.IsAny<CancellationToken>()),
+            Times.Once);
+    }
+
+    #endregion
+
+    #region LoadConfigAsync - Missing Config Tests
+
+    [Fact]
+    public async Task LoadConfigAsync_ConfigNotFound_ThrowsConfigNotFoundException()
+    {
+        // Arrange
+        _mockS3Service
+            .Setup(x => x.ReadTextAsync("test-bucket", "missing/config.json", It.IsAny<CancellationToken>()))
+            .ReturnsAsync((string?)null);
+
+        // Act & Assert
+        var exception = await Assert.ThrowsAsync<ConfigNotFoundException>(
+            () => _configLoader.LoadConfigAsync("test-bucket", "missing/"));
+
+        Assert.Equal("test-bucket", exception.Bucket);
+        Assert.Equal("missing/config.json", exception.Key);
+        Assert.Contains("Configuration file not found", exception.Message);
+    }
+
+    #endregion
+
+    #region LoadConfigAsync - Invalid JSON Tests
+
+    [Fact]
+    public async Task LoadConfigAsync_InvalidJson_ThrowsInvalidConfigException()
+    {
+        // Arrange
+        var invalidJson = "{ not valid json {{{{";
+
+        _mockS3Service
+            .Setup(x => x.ReadTextAsync("test-bucket", "api/config.json", It.IsAny<CancellationToken>()))
+            .ReturnsAsync(invalidJson);
+
+        // Act & Assert
+        var exception = await Assert.ThrowsAsync<InvalidConfigException>(
+            () => _configLoader.LoadConfigAsync("test-bucket", "api/"));
+
+        Assert.Equal("test-bucket", exception.Bucket);
+        Assert.Equal("api/config.json", exception.Key);
+        Assert.Contains("Invalid JSON", exception.Message);
+    }
+
+    [Fact]
+    public async Task LoadConfigAsync_EmptyJson_ThrowsInvalidConfigException()
+    {
+        // Arrange - empty object will fail validation for missing required fields
+        var emptyJson = "{}";
+
+        _mockS3Service
.Setup(x => x.ReadTextAsync("test-bucket", "api/config.json", It.IsAny())) + .ReturnsAsync(emptyJson); + + // Act & Assert + var exception = await Assert.ThrowsAsync( + () => _configLoader.LoadConfigAsync("test-bucket", "api/")); + + Assert.Contains("Missing required field", exception.Message); + } + + #endregion + + #region LoadConfigAsync - Validation Tests + + [Fact] + public async Task LoadConfigAsync_MissingTitle_ThrowsInvalidConfigException() + { + // Arrange + var json = """ + { + "info": { + "version": "1.0.0" + }, + "autoDiscover": true, + "output": "merged.json" + } + """; + + _mockS3Service + .Setup(x => x.ReadTextAsync("test-bucket", "api/config.json", It.IsAny())) + .ReturnsAsync(json); + + // Act & Assert + var exception = await Assert.ThrowsAsync( + () => _configLoader.LoadConfigAsync("test-bucket", "api/")); + + Assert.Contains("info.title", exception.Message); + } + + [Fact] + public async Task LoadConfigAsync_MissingVersion_ThrowsInvalidConfigException() + { + // Arrange + var json = """ + { + "info": { + "title": "Test API" + }, + "autoDiscover": true, + "output": "merged.json" + } + """; + + _mockS3Service + .Setup(x => x.ReadTextAsync("test-bucket", "api/config.json", It.IsAny())) + .ReturnsAsync(json); + + // Act & Assert + var exception = await Assert.ThrowsAsync( + () => _configLoader.LoadConfigAsync("test-bucket", "api/")); + + Assert.Contains("info.version", exception.Message); + } + + [Fact] + public async Task LoadConfigAsync_NoSourcesAndAutoDiscoverFalse_ThrowsInvalidConfigException() + { + // Arrange + var json = """ + { + "info": { + "title": "Test API", + "version": "1.0.0" + }, + "autoDiscover": false, + "sources": [], + "output": "merged.json" + } + """; + + _mockS3Service + .Setup(x => x.ReadTextAsync("test-bucket", "api/config.json", It.IsAny())) + .ReturnsAsync(json); + + // Act & Assert + var exception = await Assert.ThrowsAsync( + () => _configLoader.LoadConfigAsync("test-bucket", "api/")); + + Assert.Contains("No sources 
specified and autoDiscover is disabled", exception.Message); + } + + #endregion + + #region ExtractPrefix Tests + + [Theory] + [InlineData("publicapi/config.json", "publicapi/")] + [InlineData("internal/v2/service.json", "internal/v2/")] + [InlineData("api/users/openapi.json", "api/users/")] + [InlineData("config.json", "")] + [InlineData("", "")] + public void ExtractPrefix_VariousKeys_ReturnsExpectedPrefix(string key, string expectedPrefix) + { + // Act + var result = _configLoader.ExtractPrefix(key); + + // Assert + Assert.Equal(expectedPrefix, result); + } + + #endregion +} diff --git a/Oproto.Lambda.OpenApi.Merge.Lambda.Tests/ExplicitSourcesPropertyTests.cs b/Oproto.Lambda.OpenApi.Merge.Lambda.Tests/ExplicitSourcesPropertyTests.cs new file mode 100644 index 0000000..286a4ee --- /dev/null +++ b/Oproto.Lambda.OpenApi.Merge.Lambda.Tests/ExplicitSourcesPropertyTests.cs @@ -0,0 +1,337 @@ +using FsCheck; +using FsCheck.Xunit; +using Microsoft.Extensions.Logging; +using Moq; +using Oproto.Lambda.OpenApi.Merge; +using Oproto.Lambda.OpenApi.Merge.Lambda.Models; +using Oproto.Lambda.OpenApi.Merge.Lambda.Services; + +namespace Oproto.Lambda.OpenApi.Merge.Lambda.Tests; + +/// +/// Property-based tests for explicit sources validation. +/// Feature: lambda-merge-tool, Property 4: Explicit Sources Validation +/// **Validates: Requirements 3.3** +/// +public class ExplicitSourcesPropertyTests +{ + /// + /// Generators for explicit sources test data. + /// + private static class ExplicitSourcesGenerators + { + /// + /// Generates valid prefix values. + /// + public static Gen PrefixGen() + { + return Gen.Elements( + "publicapi/", "internalapi/", "api/v1/", "services/", + "gateway/", "admin/", ""); + } + + /// + /// Generates valid source file paths. 
+        /// </summary>
+        public static Gen<string> SourcePathGen()
+        {
+            return Gen.Elements(
+                "users.json", "orders.json", "products.json", "service1.json",
+                "api.json", "openapi.json", "spec.json", "data.json",
+                "users-service.json", "orders-api.json");
+        }
+
+        /// <summary>
+        /// Generates optional source names.
+        /// </summary>
+        public static Gen<string?> SourceNameGen()
+        {
+            return Gen.OneOf(
+                Gen.Constant<string?>(null),
+                Gen.Elements("Users", "Orders", "Products", "Service1", "API", "Data"));
+        }
+
+        /// <summary>
+        /// Generates optional path prefixes.
+        /// </summary>
+        public static Gen<string?> PathPrefixGen()
+        {
+            return Gen.OneOf(
+                Gen.Constant<string?>(null),
+                Gen.Elements("/users", "/orders", "/products", "/api/v1", "/admin"));
+        }
+
+        /// <summary>
+        /// Generates a single source configuration.
+        /// </summary>
+        public static Gen<SourceConfiguration> SourceConfigGen()
+        {
+            return from path in SourcePathGen()
+                   from name in SourceNameGen()
+                   from pathPrefix in PathPrefixGen()
+                   select new SourceConfiguration
+                   {
+                       Path = path,
+                       Name = name,
+                       PathPrefix = pathPrefix
+                   };
+        }
+
+        /// <summary>
+        /// Generates a test case with explicit sources.
+        /// </summary>
+        public static Gen<ExplicitSourcesTestCase> TestCaseWithSourcesGen()
+        {
+            return from prefix in PrefixGen()
+                   from sourceCount in Gen.Choose(1, 5)
+                   from sources in Gen.ListOf(sourceCount, SourceConfigGen())
+                   let distinctSources = sources.GroupBy(s => s.Path).Select(g => g.First()).ToList()
+                   select new ExplicitSourcesTestCase(prefix, distinctSources);
+        }
+
+        /// <summary>
+        /// Generates a test case with empty sources (for validation testing).
+        /// </summary>
+        public static Gen<ExplicitSourcesTestCase> TestCaseWithEmptySourcesGen()
+        {
+            return from prefix in PrefixGen()
+                   select new ExplicitSourcesTestCase(prefix, new List<SourceConfiguration>());
+        }
+
+        /// <summary>
+        /// Generates a test case with null sources (for validation testing).
+        /// </summary>
+        public static Gen<ExplicitSourcesTestCase> TestCaseWithNullSourcesGen()
+        {
+            return from prefix in PrefixGen()
+                   select new ExplicitSourcesTestCase(prefix, null);
+        }
+    }
+
+    /// <summary>
+    /// Test case for explicit sources testing.
+    /// </summary>
+    public record ExplicitSourcesTestCase(
+        string Prefix,
+        List<SourceConfiguration>? Sources);
+
+    private readonly Mock<IS3Service> _mockS3Service;
+    private readonly Mock<ILogger<SourceDiscovery>> _mockLogger;
+    private readonly SourceDiscovery _sourceDiscovery;
+
+    public ExplicitSourcesPropertyTests()
+    {
+        _mockS3Service = new Mock<IS3Service>();
+        _mockLogger = new Mock<ILogger<SourceDiscovery>>();
+        _sourceDiscovery = new SourceDiscovery(_mockS3Service.Object, _mockLogger.Object);
+    }
+
+    /// <summary>
+    /// Feature: lambda-merge-tool, Property 4: Explicit Sources Validation
+    /// For any LambdaMergeConfig where autoDiscover is false and sources are provided,
+    /// the discovered sources SHALL match the explicit sources list.
+    /// **Validates: Requirements 3.3**
+    /// </summary>
+    [Property(MaxTest = 100)]
+    public Property ExplicitSources_ReturnsAllConfiguredSources()
+    {
+        return Prop.ForAll(
+            ExplicitSourcesGenerators.TestCaseWithSourcesGen().ToArbitrary(),
+            testCase =>
+            {
+                // Arrange
+                var config = new LambdaMergeConfig
+                {
+                    AutoDiscover = false,
+                    Sources = testCase.Sources!,
+                    Output = "merged.json",
+                    Info = new MergeInfoConfiguration
+                    {
+                        Title = "Test",
+                        Version = "1.0"
+                    }
+                };
+
+                // Act
+                var result = _sourceDiscovery.DiscoverSourcesAsync("test-bucket", testCase.Prefix, config)
+                    .GetAwaiter().GetResult();
+
+                // Assert - should return same number of sources
+                var countMatches = result.Count == testCase.Sources!.Count;
+
+                return countMatches.Label($"Expected {testCase.Sources.Count} sources, got {result.Count}");
+            });
+    }
+
+    /// <summary>
+    /// Feature: lambda-merge-tool, Property 4: Explicit Sources Validation
+    /// For any LambdaMergeConfig where autoDiscover is false,
+    /// each discovered source SHALL have the correct S3 key constructed from prefix + path.
+ /// **Validates: Requirements 3.3** + /// + [Property(MaxTest = 100)] + public Property ExplicitSources_ConstructsCorrectS3Keys() + { + return Prop.ForAll( + ExplicitSourcesGenerators.TestCaseWithSourcesGen().ToArbitrary(), + testCase => + { + // Arrange + var config = new LambdaMergeConfig + { + AutoDiscover = false, + Sources = testCase.Sources!, + Output = "merged.json", + Info = new MergeInfoConfiguration + { + Title = "Test", + Version = "1.0" + } + }; + + // Act + var result = _sourceDiscovery.DiscoverSourcesAsync("test-bucket", testCase.Prefix, config) + .GetAwaiter().GetResult(); + + // Assert - each source key should be prefix + path + var allKeysCorrect = true; + for (int i = 0; i < testCase.Sources!.Count; i++) + { + var expectedKey = BuildExpectedKey(testCase.Prefix, testCase.Sources[i].Path); + if (result[i].Key != expectedKey) + { + allKeysCorrect = false; + break; + } + } + + return allKeysCorrect.Label($"All source keys should be correctly constructed from prefix + path"); + }); + } + + + /// + /// Feature: lambda-merge-tool, Property 4: Explicit Sources Validation + /// For any LambdaMergeConfig where autoDiscover is false, + /// each discovered source SHALL preserve the explicit configuration. 
+ /// **Validates: Requirements 3.3** + /// + [Property(MaxTest = 100)] + public Property ExplicitSources_PreservesExplicitConfiguration() + { + return Prop.ForAll( + ExplicitSourcesGenerators.TestCaseWithSourcesGen().ToArbitrary(), + testCase => + { + // Arrange + var config = new LambdaMergeConfig + { + AutoDiscover = false, + Sources = testCase.Sources!, + Output = "merged.json", + Info = new MergeInfoConfiguration + { + Title = "Test", + Version = "1.0" + } + }; + + // Act + var result = _sourceDiscovery.DiscoverSourcesAsync("test-bucket", testCase.Prefix, config) + .GetAwaiter().GetResult(); + + // Assert - each discovered source should have the explicit config attached + var allConfigsPreserved = true; + for (int i = 0; i < testCase.Sources!.Count; i++) + { + var discoveredSource = result[i]; + var originalConfig = testCase.Sources[i]; + + if (discoveredSource.ExplicitConfig == null || + discoveredSource.ExplicitConfig.Path != originalConfig.Path || + discoveredSource.ExplicitConfig.PathPrefix != originalConfig.PathPrefix) + { + allConfigsPreserved = false; + break; + } + } + + return allConfigsPreserved.Label($"All explicit configurations should be preserved in discovered sources"); + }); + } + + /// + /// Feature: lambda-merge-tool, Property 4: Explicit Sources Validation + /// For any LambdaMergeConfig where autoDiscover is false, + /// the source name SHALL be the explicit name if provided, or derived from filename. 
+ /// **Validates: Requirements 3.3** + /// + [Property(MaxTest = 100)] + public Property ExplicitSources_UsesExplicitNameOrDerivedName() + { + return Prop.ForAll( + ExplicitSourcesGenerators.TestCaseWithSourcesGen().ToArbitrary(), + testCase => + { + // Arrange + var config = new LambdaMergeConfig + { + AutoDiscover = false, + Sources = testCase.Sources!, + Output = "merged.json", + Info = new MergeInfoConfiguration + { + Title = "Test", + Version = "1.0" + } + }; + + // Act + var result = _sourceDiscovery.DiscoverSourcesAsync("test-bucket", testCase.Prefix, config) + .GetAwaiter().GetResult(); + + // Assert - name should be explicit name or derived from filename + var allNamesCorrect = true; + for (int i = 0; i < testCase.Sources!.Count; i++) + { + var discoveredSource = result[i]; + var originalConfig = testCase.Sources[i]; + + var expectedName = originalConfig.Name ?? GetNameFromFilename(originalConfig.Path); + if (discoveredSource.Name != expectedName) + { + allNamesCorrect = false; + break; + } + } + + return allNamesCorrect.Label($"Source names should be explicit name or derived from filename"); + }); + } + + private static string BuildExpectedKey(string prefix, string path) + { + if (string.IsNullOrEmpty(prefix)) + { + return path; + } + + if (!prefix.EndsWith("/")) + { + prefix += "/"; + } + + return prefix + path; + } + + private static string GetNameFromFilename(string filename) + { + if (filename.EndsWith(".json", StringComparison.OrdinalIgnoreCase)) + { + return filename.Substring(0, filename.Length - 5); + } + return filename; + } +} diff --git a/Oproto.Lambda.OpenApi.Merge.Lambda.Tests/GlobalUsings.cs b/Oproto.Lambda.OpenApi.Merge.Lambda.Tests/GlobalUsings.cs new file mode 100644 index 0000000..c802f44 --- /dev/null +++ b/Oproto.Lambda.OpenApi.Merge.Lambda.Tests/GlobalUsings.cs @@ -0,0 +1 @@ +global using Xunit; diff --git a/Oproto.Lambda.OpenApi.Merge.Lambda.Tests/LambdaMergeConfigTests.cs 
b/Oproto.Lambda.OpenApi.Merge.Lambda.Tests/LambdaMergeConfigTests.cs
new file mode 100644
index 0000000..dca4590
--- /dev/null
+++ b/Oproto.Lambda.OpenApi.Merge.Lambda.Tests/LambdaMergeConfigTests.cs
@@ -0,0 +1,192 @@
+using System.Text.Json;
+using Oproto.Lambda.OpenApi.Merge;
+using Oproto.Lambda.OpenApi.Merge.Lambda.Models;
+
+namespace Oproto.Lambda.OpenApi.Merge.Lambda.Tests;
+
+/// <summary>
+/// Unit tests for LambdaMergeConfig deserialization and default values.
+/// </summary>
+public class LambdaMergeConfigTests
+{
+    [Fact]
+    public void Deserialize_MinimalConfig_SetsDefaults()
+    {
+        // Arrange
+        var json = """
+            {
+              "info": {
+                "title": "Test API",
+                "version": "1.0.0"
+              },
+              "output": "merged.json"
+            }
+            """;
+
+        // Act
+        var config = JsonSerializer.Deserialize<LambdaMergeConfig>(json);
+
+        // Assert
+        Assert.NotNull(config);
+        Assert.Equal("Test API", config.Info.Title);
+        Assert.Equal("1.0.0", config.Info.Version);
+        Assert.Equal("merged.json", config.Output);
+        Assert.False(config.AutoDiscover);
+        Assert.Empty(config.ExcludePatterns);
+        Assert.Empty(config.Sources);
+        Assert.Empty(config.Servers);
+        Assert.Null(config.OutputBucket);
+        Assert.Equal(SchemaConflictStrategy.Rename, config.SchemaConflict);
+    }
+
+    [Fact]
+    public void Deserialize_FullConfig_SetsAllProperties()
+    {
+        // Arrange
+        var json = """
+            {
+              "info": {
+                "title": "Full API",
+                "version": "2.0.0",
+                "description": "A complete API"
+              },
+              "servers": [
+                {
+                  "url": "https://api.example.com",
+                  "description": "Production"
+                }
+              ],
+              "autoDiscover": true,
+              "excludePatterns": ["*-draft.json", "*.backup.json"],
+              "output": "api.json",
+              "outputBucket": "output-bucket",
+              "schemaConflict": "fail"
+            }
+            """;
+
+        // Act
+        var config = JsonSerializer.Deserialize<LambdaMergeConfig>(json);
+
+        // Assert
+        Assert.NotNull(config);
+        Assert.Equal("Full API", config.Info.Title);
+        Assert.Equal("2.0.0", config.Info.Version);
+        Assert.Equal("A complete API", config.Info.Description);
+        Assert.Single(config.Servers);
+        Assert.Equal("https://api.example.com",
config.Servers[0].Url); + Assert.Equal("Production", config.Servers[0].Description); + Assert.True(config.AutoDiscover); + Assert.Equal(2, config.ExcludePatterns.Count); + Assert.Contains("*-draft.json", config.ExcludePatterns); + Assert.Contains("*.backup.json", config.ExcludePatterns); + Assert.Equal("api.json", config.Output); + Assert.Equal("output-bucket", config.OutputBucket); + Assert.Equal(SchemaConflictStrategy.Fail, config.SchemaConflict); + } + + [Fact] + public void Deserialize_WithExplicitSources_LoadsSources() + { + // Arrange + var json = """ + { + "info": { + "title": "API with Sources", + "version": "1.0.0" + }, + "autoDiscover": false, + "sources": [ + { + "path": "users.json", + "name": "Users", + "pathPrefix": "/users" + }, + { + "path": "orders.json", + "name": "Orders" + } + ], + "output": "merged.json" + } + """; + + // Act + var config = JsonSerializer.Deserialize(json); + + // Assert + Assert.NotNull(config); + Assert.False(config.AutoDiscover); + Assert.Equal(2, config.Sources.Count); + Assert.Equal("users.json", config.Sources[0].Path); + Assert.Equal("Users", config.Sources[0].Name); + Assert.Equal("/users", config.Sources[0].PathPrefix); + Assert.Equal("orders.json", config.Sources[1].Path); + Assert.Equal("Orders", config.Sources[1].Name); + } + + [Fact] + public void Deserialize_OutputBucketNull_WhenNotSpecified() + { + // Arrange + var json = """ + { + "info": { + "title": "Test", + "version": "1.0.0" + } + } + """; + + // Act + var config = JsonSerializer.Deserialize(json); + + // Assert + Assert.NotNull(config); + Assert.Null(config.OutputBucket); + } + + [Fact] + public void Serialize_RoundTrip_PreservesAllProperties() + { + // Arrange + var original = new LambdaMergeConfig + { + Info = new MergeInfoConfiguration + { + Title = "Round Trip API", + Version = "3.0.0", + Description = "Testing round trip" + }, + AutoDiscover = true, + ExcludePatterns = new List { "*.draft.json" }, + Output = "output.json", + OutputBucket = 
"my-bucket", + SchemaConflict = SchemaConflictStrategy.FirstWins + }; + + // Act + var json = JsonSerializer.Serialize(original); + var deserialized = JsonSerializer.Deserialize(json); + + // Assert + Assert.NotNull(deserialized); + Assert.Equal(original.Info.Title, deserialized.Info.Title); + Assert.Equal(original.Info.Version, deserialized.Info.Version); + Assert.Equal(original.Info.Description, deserialized.Info.Description); + Assert.Equal(original.AutoDiscover, deserialized.AutoDiscover); + Assert.Equal(original.ExcludePatterns, deserialized.ExcludePatterns); + Assert.Equal(original.Output, deserialized.Output); + Assert.Equal(original.OutputBucket, deserialized.OutputBucket); + Assert.Equal(original.SchemaConflict, deserialized.SchemaConflict); + } + + [Fact] + public void LambdaMergeConfig_InheritsFromMergeConfiguration() + { + // Arrange & Act + var config = new LambdaMergeConfig(); + + // Assert - verify inheritance + Assert.IsAssignableFrom(config); + } +} diff --git a/Oproto.Lambda.OpenApi.Merge.Lambda.Tests/MergeFunctionTests.cs b/Oproto.Lambda.OpenApi.Merge.Lambda.Tests/MergeFunctionTests.cs new file mode 100644 index 0000000..59107c7 --- /dev/null +++ b/Oproto.Lambda.OpenApi.Merge.Lambda.Tests/MergeFunctionTests.cs @@ -0,0 +1,406 @@ +using Amazon.Lambda.Core; +using Microsoft.Extensions.Logging; +using Moq; +using Oproto.Lambda.OpenApi.Merge; +using Oproto.Lambda.OpenApi.Merge.Lambda.Functions; +using Oproto.Lambda.OpenApi.Merge.Lambda.Models; +using Oproto.Lambda.OpenApi.Merge.Lambda.Services; + +namespace Oproto.Lambda.OpenApi.Merge.Lambda.Tests; + +/// +/// Unit tests for MergeFunction. 
+/// </summary>
+public class MergeFunctionTests
+{
+    private readonly Mock<IS3Service> _mockS3Service;
+    private readonly Mock<IConfigLoader> _mockConfigLoader;
+    private readonly Mock<ISourceDiscovery> _mockSourceDiscovery;
+    private readonly Mock<IConditionalWriter> _mockConditionalWriter;
+    private readonly Mock<IMetricsService> _mockMetricsService;
+    private readonly Mock<ILogger<MergeFunction>> _mockLogger;
+    private readonly Mock<ILambdaContext> _mockContext;
+    private readonly MergeFunction _mergeFunction;
+
+    public MergeFunctionTests()
+    {
+        _mockS3Service = new Mock<IS3Service>();
+        _mockConfigLoader = new Mock<IConfigLoader>();
+        _mockSourceDiscovery = new Mock<ISourceDiscovery>();
+        _mockConditionalWriter = new Mock<IConditionalWriter>();
+        _mockMetricsService = new Mock<IMetricsService>();
+        _mockLogger = new Mock<ILogger<MergeFunction>>();
+        _mockContext = new Mock<ILambdaContext>();
+
+        _mergeFunction = new MergeFunction(
+            _mockS3Service.Object,
+            _mockConfigLoader.Object,
+            _mockSourceDiscovery.Object,
+            _mockConditionalWriter.Object,
+            _mockMetricsService.Object,
+            _mockLogger.Object);
+    }
+
+    #region Successful Merge Tests
+
+    [Fact]
+    public async Task Merge_SuccessfulMerge_ReturnsSuccessResponse()
+    {
+        // Arrange
+        var request = new MergeRequest("test-bucket", "publicapi/");
+        var config = CreateValidConfig();
+        var sources = CreateDiscoveredSources();
+        var openApiContent = CreateValidOpenApiJson();
+
+        SetupSuccessfulMerge(config, sources, openApiContent);
+
+        // Act
+        var response = await _mergeFunction.Merge(request, _mockContext.Object);
+
+        // Assert
+        Assert.True(response.Success);
+        Assert.Contains("Merge completed successfully", response.Message);
+        Assert.NotNull(response.Metrics);
+        Assert.Equal(2, response.Metrics.SourceFilesProcessed);
+        Assert.True(response.Metrics.OutputWritten);
+    }
+
+    [Fact]
+    public async Task Merge_OutputUnchanged_ReturnsSuccessWithOutputWrittenFalse()
+    {
+        // Arrange
+        var request = new MergeRequest("test-bucket", "publicapi/");
+        var config = CreateValidConfig();
+        var sources = CreateDiscoveredSources();
+        var openApiContent = CreateValidOpenApiJson();
+
+        SetupSuccessfulMerge(config, sources, openApiContent, outputWritten: false);
+
+        // Act
+        var
response = await _mergeFunction.Merge(request, _mockContext.Object); + + // Assert + Assert.True(response.Success); + Assert.Contains("unchanged", response.Message); + Assert.False(response.Metrics.OutputWritten); + } + + [Fact] + public async Task Merge_WithOutputBucketOverride_UsesOverrideBucket() + { + // Arrange + var request = new MergeRequest("input-bucket", "publicapi/", "output-bucket"); + var config = CreateValidConfig(); + var sources = CreateDiscoveredSources(); + var openApiContent = CreateValidOpenApiJson(); + + SetupSuccessfulMerge(config, sources, openApiContent); + + // Act + var response = await _mergeFunction.Merge(request, _mockContext.Object); + + // Assert + Assert.True(response.Success); + _mockConditionalWriter.Verify( + x => x.WriteIfChangedAsync("output-bucket", It.IsAny(), It.IsAny(), It.IsAny()), + Times.Once); + } + + [Fact] + public async Task Merge_EmitsSuccessMetrics() + { + // Arrange + var request = new MergeRequest("test-bucket", "publicapi/"); + var config = CreateValidConfig(); + var sources = CreateDiscoveredSources(); + var openApiContent = CreateValidOpenApiJson(); + + SetupSuccessfulMerge(config, sources, openApiContent); + + // Act + await _mergeFunction.Merge(request, _mockContext.Object); + + // Assert + _mockMetricsService.Verify( + x => x.EmitSuccessMetricsAsync( + "publicapi/", + It.IsAny(), + 2, + true, + It.IsAny()), + Times.Once); + } + + #endregion + + #region Error Handling Tests + + [Fact] + public async Task Merge_ConfigNotFound_ReturnsErrorResponse() + { + // Arrange + var request = new MergeRequest("test-bucket", "missing/"); + + _mockConfigLoader + .Setup(x => x.LoadConfigAsync("test-bucket", "missing/", It.IsAny())) + .ThrowsAsync(new ConfigNotFoundException("test-bucket", "missing/config.json")); + + // Act + var response = await _mergeFunction.Merge(request, _mockContext.Object); + + // Assert + Assert.False(response.Success); + Assert.Contains("Configuration file not found", response.Error); + } + + [Fact] 
+ public async Task Merge_InvalidConfig_ReturnsErrorResponse() + { + // Arrange + var request = new MergeRequest("test-bucket", "api/"); + + _mockConfigLoader + .Setup(x => x.LoadConfigAsync("test-bucket", "api/", It.IsAny())) + .ThrowsAsync(new InvalidConfigException("test-bucket", "api/config.json", "Invalid JSON")); + + // Act + var response = await _mergeFunction.Merge(request, _mockContext.Object); + + // Assert + Assert.False(response.Success); + Assert.Contains("Invalid configuration", response.Error); + } + + [Fact] + public async Task Merge_NoSourcesFound_ReturnsErrorResponse() + { + // Arrange + var request = new MergeRequest("test-bucket", "empty/"); + var config = CreateValidConfig(); + + _mockConfigLoader + .Setup(x => x.LoadConfigAsync("test-bucket", "empty/", It.IsAny())) + .ReturnsAsync(config); + + _mockSourceDiscovery + .Setup(x => x.DiscoverSourcesAsync("test-bucket", "empty/", config, It.IsAny())) + .ReturnsAsync(new List()); + + // Act + var response = await _mergeFunction.Merge(request, _mockContext.Object); + + // Assert + Assert.False(response.Success); + Assert.Contains("No valid source files found", response.Error); + } + + [Fact] + public async Task Merge_AllSourcesInvalid_ReturnsErrorResponse() + { + // Arrange + var request = new MergeRequest("test-bucket", "api/"); + var config = CreateValidConfig(); + var sources = CreateDiscoveredSources(); + + _mockConfigLoader + .Setup(x => x.LoadConfigAsync("test-bucket", "api/", It.IsAny())) + .ReturnsAsync(config); + + _mockSourceDiscovery + .Setup(x => x.DiscoverSourcesAsync("test-bucket", "api/", config, It.IsAny())) + .ReturnsAsync(sources); + + // Return null for all source files (simulating invalid/missing files) + _mockS3Service + .Setup(x => x.ReadTextAsync("test-bucket", It.IsAny(), It.IsAny())) + .ReturnsAsync((string?)null); + + // Act + var response = await _mergeFunction.Merge(request, _mockContext.Object); + + // Assert + Assert.False(response.Success); + Assert.Contains("No valid 
OpenAPI documents could be loaded", response.Error); + } + + [Fact] + public async Task Merge_ConfigNotFound_EmitsFailureMetrics() + { + // Arrange + var request = new MergeRequest("test-bucket", "missing/"); + + _mockConfigLoader + .Setup(x => x.LoadConfigAsync("test-bucket", "missing/", It.IsAny())) + .ThrowsAsync(new ConfigNotFoundException("test-bucket", "missing/config.json")); + + // Act + await _mergeFunction.Merge(request, _mockContext.Object); + + // Assert + _mockMetricsService.Verify( + x => x.EmitFailureMetricsAsync( + "missing/", + It.IsAny(), + "ConfigNotFound", + It.IsAny()), + Times.Once); + } + + [Fact] + public async Task Merge_InvalidConfig_EmitsFailureMetrics() + { + // Arrange + var request = new MergeRequest("test-bucket", "api/"); + + _mockConfigLoader + .Setup(x => x.LoadConfigAsync("test-bucket", "api/", It.IsAny())) + .ThrowsAsync(new InvalidConfigException("test-bucket", "api/config.json", "Invalid JSON")); + + // Act + await _mergeFunction.Merge(request, _mockContext.Object); + + // Assert + _mockMetricsService.Verify( + x => x.EmitFailureMetricsAsync( + "api/", + It.IsAny(), + "InvalidConfig", + It.IsAny()), + Times.Once); + } + + #endregion + + #region Conditional Write Tests + + [Fact] + public async Task Merge_ContentChanged_WritesOutput() + { + // Arrange + var request = new MergeRequest("test-bucket", "api/"); + var config = CreateValidConfig(); + var sources = CreateDiscoveredSources(); + var openApiContent = CreateValidOpenApiJson(); + + SetupSuccessfulMerge(config, sources, openApiContent, outputWritten: true); + + // Act + var response = await _mergeFunction.Merge(request, _mockContext.Object); + + // Assert + Assert.True(response.Metrics.OutputWritten); + _mockConditionalWriter.Verify( + x => x.WriteIfChangedAsync( + "test-bucket", + "api/merged.json", + It.IsAny(), + It.IsAny()), + Times.Once); + } + + [Fact] + public async Task Merge_ContentUnchanged_SkipsWrite() + { + // Arrange + var request = new 
MergeRequest("test-bucket", "api/"); + var config = CreateValidConfig(); + var sources = CreateDiscoveredSources(); + var openApiContent = CreateValidOpenApiJson(); + + SetupSuccessfulMerge(config, sources, openApiContent, outputWritten: false); + + // Act + var response = await _mergeFunction.Merge(request, _mockContext.Object); + + // Assert + Assert.False(response.Metrics.OutputWritten); + } + + #endregion + + #region Helper Methods + + private LambdaMergeConfig CreateValidConfig() + { + return new LambdaMergeConfig + { + Info = new MergeInfoConfiguration + { + Title = "Test API", + Version = "1.0.0" + }, + AutoDiscover = true, + Output = "merged.json" + }; + } + + private IReadOnlyList CreateDiscoveredSources() + { + return new List + { + new DiscoveredSource("api/users.json", "users", null), + new DiscoveredSource("api/products.json", "products", null) + }; + } + + private string CreateValidOpenApiJson() + { + return """ + { + "openapi": "3.0.0", + "info": { + "title": "Test Service", + "version": "1.0.0" + }, + "paths": { + "/test": { + "get": { + "summary": "Test endpoint", + "responses": { + "200": { + "description": "Success" + } + } + } + } + } + } + """; + } + + private void SetupSuccessfulMerge( + LambdaMergeConfig config, + IReadOnlyList sources, + string openApiContent, + bool outputWritten = true) + { + _mockConfigLoader + .Setup(x => x.LoadConfigAsync(It.IsAny(), It.IsAny(), It.IsAny())) + .ReturnsAsync(config); + + _mockSourceDiscovery + .Setup(x => x.DiscoverSourcesAsync(It.IsAny(), It.IsAny(), config, It.IsAny())) + .ReturnsAsync(sources); + + _mockS3Service + .Setup(x => x.ReadTextAsync(It.IsAny(), It.IsAny(), It.IsAny())) + .ReturnsAsync(openApiContent); + + _mockConditionalWriter + .Setup(x => x.WriteIfChangedAsync(It.IsAny(), It.IsAny(), It.IsAny(), It.IsAny())) + .ReturnsAsync(new ConditionalWriteResult( + WasWritten: outputWritten, + OutputKey: $"{config.Output}", + Reason: outputWritten ? 
"Content changed" : "Content unchanged")); + + _mockMetricsService + .Setup(x => x.EmitSuccessMetricsAsync(It.IsAny(), It.IsAny(), It.IsAny(), It.IsAny(), It.IsAny())) + .Returns(Task.CompletedTask); + + _mockMetricsService + .Setup(x => x.EmitFailureMetricsAsync(It.IsAny(), It.IsAny(), It.IsAny(), It.IsAny())) + .Returns(Task.CompletedTask); + } + + #endregion +} diff --git a/Oproto.Lambda.OpenApi.Merge.Lambda.Tests/Oproto.Lambda.OpenApi.Merge.Lambda.Tests.csproj b/Oproto.Lambda.OpenApi.Merge.Lambda.Tests/Oproto.Lambda.OpenApi.Merge.Lambda.Tests.csproj new file mode 100644 index 0000000..c7ffffc --- /dev/null +++ b/Oproto.Lambda.OpenApi.Merge.Lambda.Tests/Oproto.Lambda.OpenApi.Merge.Lambda.Tests.csproj @@ -0,0 +1,35 @@ + + + + net8.0 + enable + enable + 12 + false + true + + + + + + + + + + + + runtime; build; native; contentfiles; analyzers; buildtransitive + all + + + runtime; build; native; contentfiles; analyzers; buildtransitive + all + + + + + + + + + diff --git a/Oproto.Lambda.OpenApi.Merge.Lambda.Tests/OutputComparisonPropertyTests.cs b/Oproto.Lambda.OpenApi.Merge.Lambda.Tests/OutputComparisonPropertyTests.cs new file mode 100644 index 0000000..a132cff --- /dev/null +++ b/Oproto.Lambda.OpenApi.Merge.Lambda.Tests/OutputComparisonPropertyTests.cs @@ -0,0 +1,326 @@ +using FsCheck; +using FsCheck.Xunit; +using Microsoft.Extensions.Logging; +using Moq; +using Oproto.Lambda.OpenApi.Merge.Lambda.Services; +using System.Text.Json; +using System.Text.Json.Nodes; + +namespace Oproto.Lambda.OpenApi.Merge.Lambda.Tests; + +/// +/// Property-based tests for output comparison normalization. +/// Feature: lambda-merge-tool, Property 5: Output Comparison Normalization +/// **Validates: Requirements 5.2, 5.5** +/// +public class OutputComparisonPropertyTests +{ + /// + /// Generators for output comparison test data. + /// + private static class OutputComparisonGenerators + { + /// + /// Generates valid OpenAPI-like JSON objects with various structures. 
+ /// + public static Gen OpenApiJsonGen() + { + return from title in Gen.Elements("Test API", "My API", "Sample API", "Product API") + from version in Gen.Elements("1.0.0", "2.0.0", "1.1.0", "3.0.0") + from pathCount in Gen.Choose(1, 3) + from paths in Gen.ListOf(pathCount, PathGen()) + from schemaCount in Gen.Choose(0, 2) + from schemas in Gen.ListOf(schemaCount, SchemaGen()) + select BuildOpenApiJson(title, version, paths.ToList(), schemas.ToList()); + } + + /// + /// Generates path entries. + /// + private static Gen<(string Path, string Method, string Summary)> PathGen() + { + return from path in Gen.Elements("/users", "/products", "/orders", "/items", "/api/v1/data") + from method in Gen.Elements("get", "post", "put", "delete") + from summary in Gen.Elements("Get resource", "Create resource", "Update resource", "Delete resource") + select (path, method, summary); + } + + /// + /// Generates schema entries. + /// + private static Gen<(string Name, string Type)> SchemaGen() + { + return from name in Gen.Elements("User", "Product", "Order", "Item", "Response") + from type in Gen.Elements("object", "string", "integer", "array") + select (name, type); + } + + private static JsonObject BuildOpenApiJson( + string title, + string version, + List<(string Path, string Method, string Summary)> paths, + List<(string Name, string Type)> schemas) + { + var doc = new JsonObject + { + ["openapi"] = "3.0.0", + ["info"] = new JsonObject + { + ["title"] = title, + ["version"] = version + } + }; + + // Add paths + var pathsObj = new JsonObject(); + foreach (var (path, method, summary) in paths) + { + if (!pathsObj.ContainsKey(path)) + { + pathsObj[path] = new JsonObject(); + } + var pathItem = pathsObj[path]!.AsObject(); + pathItem[method] = new JsonObject + { + ["summary"] = summary, + ["responses"] = new JsonObject + { + ["200"] = new JsonObject + { + ["description"] = "Success" + } + } + }; + } + doc["paths"] = pathsObj; + + // Add schemas if any + if (schemas.Count > 0) + { 
+ var schemasObj = new JsonObject(); + foreach (var (name, type) in schemas) + { + schemasObj[name] = new JsonObject + { + ["type"] = type + }; + } + doc["components"] = new JsonObject + { + ["schemas"] = schemasObj + }; + } + + return doc; + } + + /// + /// Generates a test case with original JSON and a formatting variation. + /// + public static Gen TestCaseGen() + { + return from jsonObj in OpenApiJsonGen() + from variationType in Gen.Elements( + FormattingVariation.ReorderProperties, + FormattingVariation.ChangeWhitespace, + FormattingVariation.MinifyJson, + FormattingVariation.PrettyPrint) + let originalJson = jsonObj.ToJsonString(new JsonSerializerOptions { WriteIndented = true }) + let variedJson = ApplyVariation(jsonObj, variationType) + select new OutputComparisonTestCase(originalJson, variedJson, variationType); + } + + private static string ApplyVariation(JsonObject original, FormattingVariation variation) + { + return variation switch + { + FormattingVariation.ReorderProperties => ReorderProperties(original), + FormattingVariation.ChangeWhitespace => ChangeWhitespace(original), + FormattingVariation.MinifyJson => original.ToJsonString(new JsonSerializerOptions { WriteIndented = false }), + FormattingVariation.PrettyPrint => original.ToJsonString(new JsonSerializerOptions { WriteIndented = true }), + _ => original.ToJsonString() + }; + } + + private static string ReorderProperties(JsonObject original) + { + // Reverse the order of properties at the top level + var reversed = new JsonObject(); + var properties = original.ToList(); + properties.Reverse(); + foreach (var kvp in properties) + { + reversed[kvp.Key] = JsonNode.Parse(kvp.Value?.ToJsonString() ?? 
"null"); + } + return reversed.ToJsonString(new JsonSerializerOptions { WriteIndented = true }); + } + + private static string ChangeWhitespace(JsonObject original) + { + // Add extra whitespace by using different indentation + var json = original.ToJsonString(new JsonSerializerOptions { WriteIndented = true }); + // Add extra newlines and spaces + return json.Replace("\n", "\n\n").Replace(" ", "  "); + } + } + + /// <summary> + /// Types of formatting variations to apply. + /// </summary> + public enum FormattingVariation + { + ReorderProperties, + ChangeWhitespace, + MinifyJson, + PrettyPrint + } + + /// <summary> + /// Test case for output comparison testing. + /// </summary> + public record OutputComparisonTestCase( + string OriginalJson, + string VariedJson, + FormattingVariation VariationType); + + + private readonly Mock<ILogger<OutputComparer>> _mockLogger; + private readonly OutputComparer _outputComparer; + + public OutputComparisonPropertyTests() + { + _mockLogger = new Mock<ILogger<OutputComparer>>(); + _outputComparer = new OutputComparer(_mockLogger.Object); + } + + /// <summary> + /// Feature: lambda-merge-tool, Property 5: Output Comparison Normalization + /// For any two OpenAPI documents that are semantically equivalent but differ only + /// in JSON formatting (whitespace, property order), the comparison function SHALL return true.
+ /// **Validates: Requirements 5.2, 5.5** + /// + [Property(MaxTest = 100)] + public Property SemanticallyEquivalentDocuments_WithFormattingDifferences_AreEqual() + { + return Prop.ForAll( + OutputComparisonGenerators.TestCaseGen().ToArbitrary(), + testCase => + { + // Act + var areEquivalent = _outputComparer.AreEquivalent(testCase.OriginalJson, testCase.VariedJson); + + // Assert - semantically equivalent documents should be equal + return areEquivalent.Label( + $"Documents with {testCase.VariationType} variation should be equivalent.\n" + + $"Original (first 200 chars): {Truncate(testCase.OriginalJson, 200)}\n" + + $"Varied (first 200 chars): {Truncate(testCase.VariedJson, 200)}"); + }); + } + + /// + /// Feature: lambda-merge-tool, Property 5: Output Comparison Normalization + /// For any JSON document, normalizing it twice should produce the same result (idempotence). + /// **Validates: Requirements 5.2, 5.5** + /// + [Property(MaxTest = 100)] + public Property NormalizationIsIdempotent() + { + return Prop.ForAll( + OutputComparisonGenerators.OpenApiJsonGen().ToArbitrary(), + jsonObj => + { + // Arrange + var originalJson = jsonObj.ToJsonString(new JsonSerializerOptions { WriteIndented = true }); + + // Act + var normalized1 = _outputComparer.NormalizeJson(originalJson); + var normalized2 = _outputComparer.NormalizeJson(normalized1); + + // Assert - normalizing twice should produce the same result + var isIdempotent = normalized1 == normalized2; + + return isIdempotent.Label( + $"Normalization should be idempotent.\n" + + $"First normalization (first 200 chars): {Truncate(normalized1, 200)}\n" + + $"Second normalization (first 200 chars): {Truncate(normalized2, 200)}"); + }); + } + + /// + /// Feature: lambda-merge-tool, Property 5: Output Comparison Normalization + /// For any JSON document, the normalized output should have properties sorted alphabetically. 
+ /// **Validates: Requirements 5.2, 5.5** + /// + [Property(MaxTest = 100)] + public Property NormalizedJsonHasSortedProperties() + { + return Prop.ForAll( + OutputComparisonGenerators.OpenApiJsonGen().ToArbitrary(), + jsonObj => + { + // Arrange + var originalJson = jsonObj.ToJsonString(new JsonSerializerOptions { WriteIndented = true }); + + // Act + var normalized = _outputComparer.NormalizeJson(originalJson); + var parsedNormalized = JsonNode.Parse(normalized)?.AsObject(); + + if (parsedNormalized == null) + { + return false.Label("Failed to parse normalized JSON"); + } + + // Assert - properties should be sorted alphabetically + var properties = parsedNormalized.Select(kvp => kvp.Key).ToList(); + var sortedProperties = properties.OrderBy(p => p, StringComparer.Ordinal).ToList(); + var isSorted = properties.SequenceEqual(sortedProperties); + + return isSorted.Label( + $"Properties should be sorted alphabetically.\n" + + $"Actual order: [{string.Join(", ", properties)}]\n" + + $"Expected order: [{string.Join(", ", sortedProperties)}]"); + }); + } + + /// + /// Feature: lambda-merge-tool, Property 5: Output Comparison Normalization + /// For any two different JSON documents, the comparison function SHALL return false. 
+ /// **Validates: Requirements 5.2, 5.5** + /// + [Property(MaxTest = 100)] + public Property DifferentDocuments_AreNotEqual() + { + return Prop.ForAll( + OutputComparisonGenerators.OpenApiJsonGen().ToArbitrary(), + OutputComparisonGenerators.OpenApiJsonGen().ToArbitrary(), + (jsonObj1, jsonObj2) => + { + // Arrange + var json1 = jsonObj1.ToJsonString(); + var json2 = jsonObj2.ToJsonString(); + + // Skip if they happen to be the same + if (json1 == json2) + { + return true.Label("Skipped - documents are identical"); + } + + // Act + var areEquivalent = _outputComparer.AreEquivalent(json1, json2); + + // Assert - different documents should not be equal + // Note: This may occasionally fail if two different generated documents + // happen to be semantically equivalent, which is acceptable + return (!areEquivalent).Label( + $"Different documents should not be equivalent.\n" + + $"Doc1 (first 100 chars): {Truncate(json1, 100)}\n" + + $"Doc2 (first 100 chars): {Truncate(json2, 100)}"); + }); + } + + private static string Truncate(string s, int maxLength) + { + if (string.IsNullOrEmpty(s)) return s; + return s.Length <= maxLength ? s : s.Substring(0, maxLength) + "..."; + } +} diff --git a/Oproto.Lambda.OpenApi.Merge.Lambda.Tests/OutputPathConstructionPropertyTests.cs b/Oproto.Lambda.OpenApi.Merge.Lambda.Tests/OutputPathConstructionPropertyTests.cs new file mode 100644 index 0000000..5596a39 --- /dev/null +++ b/Oproto.Lambda.OpenApi.Merge.Lambda.Tests/OutputPathConstructionPropertyTests.cs @@ -0,0 +1,239 @@ +using FsCheck; +using FsCheck.Xunit; +using Oproto.Lambda.OpenApi.Merge.Cdk; + +namespace Oproto.Lambda.OpenApi.Merge.Lambda.Tests; + +/// +/// Property-based tests for output path construction. +/// Feature: lambda-merge-tool, Property 7: Output Path Construction +/// **Validates: Requirements 6.3, 6.5** +/// +public class OutputPathConstructionPropertyTests +{ + /// + /// Generators for output path test data. 
+ /// </summary> + private static class OutputPathGenerators + { + /// <summary> + /// Generates valid prefix segments (directory names). + /// </summary> + public static Gen<string> PrefixSegmentGen() + { + return Gen.Elements( + "publicapi", "internalapi", "api", "v1", "v2", "v3", + "users", "orders", "products", "services", "gateway", + "admin", "public", "private", "staging", "production"); + } + + /// <summary> + /// Generates valid simple output filenames (no path separators). + /// </summary> + public static Gen<string> SimpleFilenameGen() + { + return Gen.Elements( + "merged.json", "openapi.json", "api.json", "output.json", + "combined.json", "spec.json", "merged-openapi.json", + "public-api.json", "internal-api.json"); + } + + /// <summary> + /// Generates a single-level prefix (e.g., "publicapi/"). + /// </summary> + public static Gen<string> SingleLevelPrefixGen() + { + return from segment in PrefixSegmentGen() + select $"{segment}/"; + } + + /// <summary> + /// Generates a multi-level prefix (e.g., "internal/v2/"). + /// </summary> + public static Gen<string> MultiLevelPrefixGen() + { + return from segmentCount in Gen.Choose(2, 4) + from segments in Gen.ListOf(segmentCount, PrefixSegmentGen()) + select string.Join("/", segments) + "/"; + } + + /// <summary> + /// Generates a prefix without trailing slash (to test normalization). + /// </summary> + public static Gen<string> PrefixWithoutTrailingSlashGen() + { + return from segment in PrefixSegmentGen() + select segment; + } + + /// <summary> + /// Generates any valid prefix. + /// </summary> + public static Gen<string> AnyPrefixGen() + { + return Gen.OneOf( + SingleLevelPrefixGen(), + MultiLevelPrefixGen(), + PrefixWithoutTrailingSlashGen()); + } + + /// <summary> + /// Generates a full path output (contains '/'). + /// </summary> + public static Gen<string> FullPathOutputGen() + { + return from segment in PrefixSegmentGen() + from filename in SimpleFilenameGen() + select $"{segment}/{filename}"; + } + + /// <summary> + /// Generates an absolute path output (starts with '/').
+ /// </summary> + public static Gen<string> AbsolutePathOutputGen() + { + return from segment in PrefixSegmentGen() + from filename in SimpleFilenameGen() + select $"/{segment}/{filename}"; + } + } + + /// <summary> + /// Feature: lambda-merge-tool, Property 7: Output Path Construction + /// For any prefix and simple filename (no '/'), the output path SHALL be {prefix}/{filename}. + /// **Validates: Requirements 6.3, 6.5** + /// </summary> + [Property(MaxTest = 100)] + public Property SimpleFilename_IsRelativeToPrefix() + { + return Prop.ForAll( + OutputPathGenerators.AnyPrefixGen().ToArbitrary(), + OutputPathGenerators.SimpleFilenameGen().ToArbitrary(), + (prefix, filename) => + { + // Act + var outputPath = OutputPathHelper.ConstructOutputPath(prefix, filename); + + // Assert - path should be normalized prefix + filename + var normalizedPrefix = prefix.TrimEnd('/') + "/"; + var expectedPath = normalizedPrefix + filename; + + return (outputPath == expectedPath) + .Label($"Prefix: '{prefix}', Filename: '{filename}' -> Expected: '{expectedPath}', Actual: '{outputPath}'"); + }); + } + + /// <summary> + /// Feature: lambda-merge-tool, Property 7: Output Path Construction + /// For any full path output (contains '/'), the output path SHALL be the full path as-is. + /// **Validates: Requirements 6.3, 6.5** + /// </summary> + [Property(MaxTest = 100)] + public Property FullPathOutput_IsUsedAsIs() + { + return Prop.ForAll( + OutputPathGenerators.AnyPrefixGen().ToArbitrary(), + OutputPathGenerators.FullPathOutputGen().ToArbitrary(), + (prefix, fullPath) => + { + // Act + var outputPath = OutputPathHelper.ConstructOutputPath(prefix, fullPath); + + // Assert - full path should be used as-is (not prefixed) + return (outputPath == fullPath) + .Label($"Full path '{fullPath}' should be used as-is, not prefixed. Got: '{outputPath}'"); + }); + } + + /// <summary> + /// Feature: lambda-merge-tool, Property 7: Output Path Construction + /// For any absolute path output (starts with '/'), the output path SHALL be the path without leading slash.
+ /// **Validates: Requirements 6.3, 6.5** + /// </summary> + [Property(MaxTest = 100)] + public Property AbsolutePathOutput_HasLeadingSlashRemoved() + { + return Prop.ForAll( + OutputPathGenerators.AnyPrefixGen().ToArbitrary(), + OutputPathGenerators.AbsolutePathOutputGen().ToArbitrary(), + (prefix, absolutePath) => + { + // Act + var outputPath = OutputPathHelper.ConstructOutputPath(prefix, absolutePath); + + // Assert - leading slash should be removed + var expectedPath = absolutePath.TrimStart('/'); + return (outputPath == expectedPath) + .Label($"Absolute path '{absolutePath}' should have leading slash removed. Expected: '{expectedPath}', Got: '{outputPath}'"); + }); + } + + /// <summary> + /// Feature: lambda-merge-tool, Property 7: Output Path Construction + /// Output path construction SHALL be deterministic (same inputs always produce same output). + /// **Validates: Requirements 6.3, 6.5** + /// </summary> + [Property(MaxTest = 100)] + public Property ConstructOutputPath_IsDeterministic() + { + return Prop.ForAll( + OutputPathGenerators.AnyPrefixGen().ToArbitrary(), + OutputPathGenerators.SimpleFilenameGen().ToArbitrary(), + (prefix, filename) => + { + // Act - construct path multiple times + var result1 = OutputPathHelper.ConstructOutputPath(prefix, filename); + var result2 = OutputPathHelper.ConstructOutputPath(prefix, filename); + var result3 = OutputPathHelper.ConstructOutputPath(prefix, filename); + + // Assert - all results should be identical + return (result1 == result2 && result2 == result3) + .Label($"Construction should be deterministic for prefix '{prefix}' and filename '{filename}'"); + }); + } + + /// <summary> + /// Feature: lambda-merge-tool, Property 7: Output Path Construction + /// For simple filenames, extracting the prefix from constructed path SHALL return the normalized prefix.
+ /// **Validates: Requirements 6.3, 6.5** + /// </summary> + [Property(MaxTest = 100)] + public Property ExtractPrefix_FromSimpleFilename_ReturnsNormalizedPrefix() + { + return Prop.ForAll( + OutputPathGenerators.AnyPrefixGen().ToArbitrary(), + OutputPathGenerators.SimpleFilenameGen().ToArbitrary(), + (prefix, filename) => + { + // Act + var outputPath = OutputPathHelper.ConstructOutputPath(prefix, filename); + var extractedPrefix = OutputPathHelper.ExtractPrefix(outputPath); + + // Assert + var normalizedPrefix = prefix.TrimEnd('/') + "/"; + return (extractedPrefix == normalizedPrefix) + .Label($"Extracted prefix '{extractedPrefix}' should equal normalized prefix '{normalizedPrefix}' for path '{outputPath}'"); + }); + } + + /// <summary> + /// Feature: lambda-merge-tool, Property 7: Output Path Construction + /// Empty prefix with simple filename SHALL return just the filename. + /// **Validates: Requirements 6.3, 6.5** + /// </summary> + [Property(MaxTest = 100)] + public Property EmptyPrefix_ReturnsJustFilename() + { + return Prop.ForAll( + OutputPathGenerators.SimpleFilenameGen().ToArbitrary(), + filename => + { + // Act + var outputPath = OutputPathHelper.ConstructOutputPath("", filename); + + // Assert + return (outputPath == filename) + .Label($"Empty prefix with filename '{filename}' should return just the filename. Got: '{outputPath}'"); + }); + } +} diff --git a/Oproto.Lambda.OpenApi.Merge.Lambda.Tests/PrefixExtractionPropertyTests.cs b/Oproto.Lambda.OpenApi.Merge.Lambda.Tests/PrefixExtractionPropertyTests.cs new file mode 100644 index 0000000..123570d --- /dev/null +++ b/Oproto.Lambda.OpenApi.Merge.Lambda.Tests/PrefixExtractionPropertyTests.cs @@ -0,0 +1,222 @@ +using FsCheck; +using FsCheck.Xunit; +using Microsoft.Extensions.Logging; +using Moq; +using Oproto.Lambda.OpenApi.Merge.Lambda.Services; + +namespace Oproto.Lambda.OpenApi.Merge.Lambda.Tests; + +/// <summary> +/// Property-based tests for prefix extraction from S3 keys.
+/// Feature: lambda-merge-tool, Property 1: Prefix Extraction Consistency +/// **Validates: Requirements 1.1, 1.2, 1.3, 1.4** +/// </summary> +public class PrefixExtractionPropertyTests +{ + /// <summary> + /// Generators for S3 key test data. + /// </summary> + private static class S3KeyGenerators + { + /// <summary> + /// Generates valid prefix segments (directory names). + /// </summary> + public static Gen<string> PrefixSegmentGen() + { + return Gen.Elements( + "publicapi", "internalapi", "api", "v1", "v2", "v3", + "users", "orders", "products", "services", "gateway", + "admin", "public", "private", "staging", "production"); + } + + /// <summary> + /// Generates valid filenames. + /// </summary> + public static Gen<string> FilenameGen() + { + return Gen.Elements( + "config.json", "openapi.json", "api.json", "merged.json", + "users-service.json", "orders-service.json", "products.json", + "service1.json", "service2.json", "spec.json"); + } + + /// <summary> + /// Generates a single-level prefix S3 key (e.g., "publicapi/config.json"). + /// </summary> + public static Gen<(string Key, string ExpectedPrefix, string ExpectedFilename)> SingleLevelKeyGen() + { + return from prefix in PrefixSegmentGen() + from filename in FilenameGen() + let key = $"{prefix}/{filename}" + let expectedPrefix = $"{prefix}/" + select (key, expectedPrefix, filename); + } + + /// <summary> + /// Generates a multi-level prefix S3 key (e.g., "internal/v2/service.json"). + /// </summary> + public static Gen<(string Key, string ExpectedPrefix, string ExpectedFilename)> MultiLevelKeyGen() + { + return from segmentCount in Gen.Choose(2, 4) + from segments in Gen.ListOf(segmentCount, PrefixSegmentGen()) + from filename in FilenameGen() + let prefix = string.Join("/", segments) + let key = $"{prefix}/{filename}" + let expectedPrefix = $"{prefix}/" + select (key, expectedPrefix, filename); + } + + /// <summary> + /// Generates root-level S3 keys (no prefix, e.g., "config.json").
+ /// + public static Gen<(string Key, string ExpectedPrefix, string ExpectedFilename)> RootLevelKeyGen() + { + return from filename in FilenameGen() + select (filename, string.Empty, filename); + } + + /// + /// Generates any valid S3 key with its expected prefix. + /// + public static Gen<(string Key, string ExpectedPrefix, string ExpectedFilename)> AnyValidKeyGen() + { + return Gen.OneOf( + SingleLevelKeyGen(), + MultiLevelKeyGen(), + RootLevelKeyGen()); + } + } + + private readonly ConfigLoader _configLoader; + + public PrefixExtractionPropertyTests() + { + var mockS3Service = new Mock(); + var mockLogger = new Mock>(); + _configLoader = new ConfigLoader(mockS3Service.Object, mockLogger.Object); + } + + /// + /// Feature: lambda-merge-tool, Property 1: Prefix Extraction Consistency + /// For any valid S3 key, extracting the prefix SHALL return the directory path portion + /// before the filename. + /// **Validates: Requirements 1.1, 1.2, 1.3, 1.4** + /// + [Property(MaxTest = 100)] + public Property ExtractPrefix_ReturnsDirectoryPathBeforeFilename() + { + return Prop.ForAll( + S3KeyGenerators.AnyValidKeyGen().ToArbitrary(), + testCase => + { + var (key, expectedPrefix, _) = testCase; + + // Act + var actualPrefix = _configLoader.ExtractPrefix(key); + + // Assert + return (actualPrefix == expectedPrefix) + .Label($"Key: '{key}' -> Expected: '{expectedPrefix}', Actual: '{actualPrefix}'"); + }); + } + + /// + /// Feature: lambda-merge-tool, Property 1: Prefix Extraction Consistency + /// For any valid S3 key, prefix extraction SHALL be deterministic (same input always + /// produces same output). 
+ /// **Validates: Requirements 1.1, 1.2, 1.3, 1.4** + /// </summary> + [Property(MaxTest = 100)] + public Property ExtractPrefix_IsDeterministic() + { + return Prop.ForAll( + S3KeyGenerators.AnyValidKeyGen().ToArbitrary(), + testCase => + { + var (key, _, _) = testCase; + + // Act - extract prefix multiple times + var result1 = _configLoader.ExtractPrefix(key); + var result2 = _configLoader.ExtractPrefix(key); + var result3 = _configLoader.ExtractPrefix(key); + + // Assert - all results should be identical + return (result1 == result2 && result2 == result3) + .Label($"Extraction should be deterministic for key: '{key}'"); + }); + } + + /// <summary> + /// Feature: lambda-merge-tool, Property 1: Prefix Extraction Consistency + /// For any valid S3 key with a prefix, the extracted prefix combined with the filename + /// SHALL reconstruct the original key. + /// **Validates: Requirements 1.1, 1.2, 1.3, 1.4** + /// </summary> + [Property(MaxTest = 100)] + public Property ExtractPrefix_PlusFilename_ReconstructsOriginalKey() + { + return Prop.ForAll( + S3KeyGenerators.AnyValidKeyGen().ToArbitrary(), + testCase => + { + var (key, _, expectedFilename) = testCase; + + // Act + var prefix = _configLoader.ExtractPrefix(key); + var reconstructedKey = prefix + expectedFilename; + + // Assert + return (reconstructedKey == key) + .Label($"Reconstructed key '{reconstructedKey}' should equal original '{key}'"); + }); + } + + /// <summary> + /// Feature: lambda-merge-tool, Property 1: Prefix Extraction Consistency + /// For any S3 key with a prefix, the extracted prefix SHALL end with a forward slash.
+ /// **Validates: Requirements 1.1, 1.2, 1.3, 1.4** + /// + [Property(MaxTest = 100)] + public Property ExtractPrefix_WithPrefix_EndsWithSlash() + { + return Prop.ForAll( + Gen.OneOf( + S3KeyGenerators.SingleLevelKeyGen(), + S3KeyGenerators.MultiLevelKeyGen() + ).ToArbitrary(), + testCase => + { + var (key, _, _) = testCase; + + // Act + var prefix = _configLoader.ExtractPrefix(key); + + // Assert - non-empty prefix should end with / + return (prefix.EndsWith("/")) + .Label($"Prefix '{prefix}' should end with '/' for key '{key}'"); + }); + } + + /// + /// Feature: lambda-merge-tool, Property 1: Prefix Extraction Consistency + /// For root-level keys (no directory), the extracted prefix SHALL be empty. + /// **Validates: Requirements 1.1, 1.2, 1.3, 1.4** + /// + [Property(MaxTest = 100)] + public Property ExtractPrefix_RootLevelKey_ReturnsEmptyString() + { + return Prop.ForAll( + S3KeyGenerators.RootLevelKeyGen().ToArbitrary(), + testCase => + { + var (key, _, _) = testCase; + + // Act + var prefix = _configLoader.ExtractPrefix(key); + + // Assert + return (prefix == string.Empty) + .Label($"Root-level key '{key}' should have empty prefix, got '{prefix}'"); + }); + } +} diff --git a/Oproto.Lambda.OpenApi.Merge.Lambda.Tests/S3ServiceTests.cs b/Oproto.Lambda.OpenApi.Merge.Lambda.Tests/S3ServiceTests.cs new file mode 100644 index 0000000..8deb506 --- /dev/null +++ b/Oproto.Lambda.OpenApi.Merge.Lambda.Tests/S3ServiceTests.cs @@ -0,0 +1,298 @@ +using System.Net; +using System.Text; +using System.Text.Json; +using Amazon.S3; +using Amazon.S3.Model; +using Microsoft.Extensions.Logging; +using Moq; +using Oproto.Lambda.OpenApi.Merge.Lambda.Services; + +namespace Oproto.Lambda.OpenApi.Merge.Lambda.Tests; + +/// +/// Unit tests for S3Service. 
+/// +public class S3ServiceTests +{ + private readonly Mock _mockS3Client; + private readonly Mock> _mockLogger; + private readonly S3Service _service; + + public S3ServiceTests() + { + _mockS3Client = new Mock(); + _mockLogger = new Mock>(); + _service = new S3Service(_mockS3Client.Object, _mockLogger.Object); + } + + #region ReadJsonAsync Tests + + [Fact] + public async Task ReadJsonAsync_ValidJson_DeserializesCorrectly() + { + // Arrange + var testObject = new TestModel { Name = "Test", Value = 42 }; + var json = JsonSerializer.Serialize(testObject, new JsonSerializerOptions { PropertyNamingPolicy = JsonNamingPolicy.CamelCase }); + SetupGetObjectResponse("test-bucket", "test-key.json", json); + + // Act + var result = await _service.ReadJsonAsync("test-bucket", "test-key.json"); + + // Assert + Assert.NotNull(result); + Assert.Equal("Test", result.Name); + Assert.Equal(42, result.Value); + } + + [Fact] + public async Task ReadJsonAsync_ObjectNotFound_ReturnsNull() + { + // Arrange + SetupGetObjectNotFound("test-bucket", "missing-key.json"); + + // Act + var result = await _service.ReadJsonAsync("test-bucket", "missing-key.json"); + + // Assert + Assert.Null(result); + } + + + [Fact] + public async Task ReadJsonAsync_InvalidJson_ThrowsJsonException() + { + // Arrange + SetupGetObjectResponse("test-bucket", "invalid.json", "not valid json {{{"); + + // Act & Assert + await Assert.ThrowsAsync(() => + _service.ReadJsonAsync("test-bucket", "invalid.json")); + } + + #endregion + + #region ReadTextAsync Tests + + [Fact] + public async Task ReadTextAsync_ValidObject_ReturnsContent() + { + // Arrange + var content = "Hello, World!"; + SetupGetObjectResponse("test-bucket", "test.txt", content); + + // Act + var result = await _service.ReadTextAsync("test-bucket", "test.txt"); + + // Assert + Assert.Equal(content, result); + } + + [Fact] + public async Task ReadTextAsync_ObjectNotFound_ReturnsNull() + { + // Arrange + SetupGetObjectNotFound("test-bucket", "missing.txt"); 
+ + // Act + var result = await _service.ReadTextAsync("test-bucket", "missing.txt"); + + // Assert + Assert.Null(result); + } + + #endregion + + #region WriteJsonAsync Tests + + [Fact] + public async Task WriteJsonAsync_ValidObject_SerializesAndWrites() + { + // Arrange + var testObject = new TestModel { Name = "Test", Value = 42 }; + PutObjectRequest? capturedRequest = null; + + _mockS3Client + .Setup(x => x.PutObjectAsync(It.IsAny(), It.IsAny())) + .Callback((req, _) => capturedRequest = req) + .ReturnsAsync(new PutObjectResponse()); + + // Act + await _service.WriteJsonAsync("test-bucket", "output.json", testObject); + + // Assert + Assert.NotNull(capturedRequest); + Assert.Equal("test-bucket", capturedRequest.BucketName); + Assert.Equal("output.json", capturedRequest.Key); + Assert.Equal("application/json", capturedRequest.ContentType); + Assert.Contains("\"name\"", capturedRequest.ContentBody); + Assert.Contains("\"value\"", capturedRequest.ContentBody); + } + + #endregion + + #region ListObjectsAsync Tests + + [Fact] + public async Task ListObjectsAsync_ReturnsAllKeys() + { + // Arrange + var response = new ListObjectsV2Response + { + S3Objects = new List + { + new() { Key = "prefix/file1.json" }, + new() { Key = "prefix/file2.json" }, + new() { Key = "prefix/file3.json" } + }, + IsTruncated = false + }; + + _mockS3Client + .Setup(x => x.ListObjectsV2Async(It.IsAny(), It.IsAny())) + .ReturnsAsync(response); + + // Act + var result = await _service.ListObjectsAsync("test-bucket", "prefix/"); + + // Assert + Assert.Equal(3, result.Count); + Assert.Contains("prefix/file1.json", result); + Assert.Contains("prefix/file2.json", result); + Assert.Contains("prefix/file3.json", result); + } + + + [Fact] + public async Task ListObjectsAsync_HandlesPagination() + { + // Arrange + var firstResponse = new ListObjectsV2Response + { + S3Objects = new List + { + new() { Key = "prefix/file1.json" }, + new() { Key = "prefix/file2.json" } + }, + IsTruncated = true, + 
NextContinuationToken = "token123" + }; + + var secondResponse = new ListObjectsV2Response + { + S3Objects = new List + { + new() { Key = "prefix/file3.json" } + }, + IsTruncated = false + }; + + var callCount = 0; + _mockS3Client + .Setup(x => x.ListObjectsV2Async(It.IsAny(), It.IsAny())) + .ReturnsAsync(() => callCount++ == 0 ? firstResponse : secondResponse); + + // Act + var result = await _service.ListObjectsAsync("test-bucket", "prefix/"); + + // Assert + Assert.Equal(3, result.Count); + Assert.Contains("prefix/file1.json", result); + Assert.Contains("prefix/file2.json", result); + Assert.Contains("prefix/file3.json", result); + } + + [Fact] + public async Task ListObjectsAsync_EmptyPrefix_ReturnsEmptyList() + { + // Arrange + var response = new ListObjectsV2Response + { + S3Objects = new List(), + IsTruncated = false + }; + + _mockS3Client + .Setup(x => x.ListObjectsV2Async(It.IsAny(), It.IsAny())) + .ReturnsAsync(response); + + // Act + var result = await _service.ListObjectsAsync("test-bucket", "nonexistent/"); + + // Assert + Assert.Empty(result); + } + + #endregion + + #region ExistsAsync Tests + + [Fact] + public async Task ExistsAsync_ObjectExists_ReturnsTrue() + { + // Arrange + _mockS3Client + .Setup(x => x.GetObjectMetadataAsync(It.IsAny(), It.IsAny())) + .ReturnsAsync(new GetObjectMetadataResponse()); + + // Act + var result = await _service.ExistsAsync("test-bucket", "existing.json"); + + // Assert + Assert.True(result); + } + + [Fact] + public async Task ExistsAsync_ObjectNotFound_ReturnsFalse() + { + // Arrange + _mockS3Client + .Setup(x => x.GetObjectMetadataAsync(It.IsAny(), It.IsAny())) + .ThrowsAsync(new AmazonS3Exception("Not Found") { StatusCode = HttpStatusCode.NotFound }); + + // Act + var result = await _service.ExistsAsync("test-bucket", "missing.json"); + + // Assert + Assert.False(result); + } + + #endregion + + #region Helper Methods + + private void SetupGetObjectResponse(string bucket, string key, string content) + { + var stream 
= new MemoryStream(Encoding.UTF8.GetBytes(content));
+        var response = new GetObjectResponse
+        {
+            ResponseStream = stream
+        };
+
+        _mockS3Client
+            .Setup(x => x.GetObjectAsync(
+                It.Is<GetObjectRequest>(r => r.BucketName == bucket && r.Key == key),
+                It.IsAny<CancellationToken>()))
+            .ReturnsAsync(response);
+    }
+
+    private void SetupGetObjectNotFound(string bucket, string key)
+    {
+        _mockS3Client
+            .Setup(x => x.GetObjectAsync(
+                It.Is<GetObjectRequest>(r => r.BucketName == bucket && r.Key == key),
+                It.IsAny<CancellationToken>()))
+            .ThrowsAsync(new AmazonS3Exception("Not Found") { StatusCode = HttpStatusCode.NotFound });
+    }
+
+    #endregion
+
+    #region Test Models
+
+    private class TestModel
+    {
+        public string Name { get; set; } = string.Empty;
+        public int Value { get; set; }
+    }
+
+    #endregion
+} diff --git a/Oproto.Lambda.OpenApi.Merge.Lambda.Tests/SourceDiscoveryTests.cs b/Oproto.Lambda.OpenApi.Merge.Lambda.Tests/SourceDiscoveryTests.cs new file mode 100644 index 0000000..cfaaf11 --- /dev/null +++ b/Oproto.Lambda.OpenApi.Merge.Lambda.Tests/SourceDiscoveryTests.cs @@ -0,0 +1,374 @@
+using Microsoft.Extensions.Logging;
+using Moq;
+using Oproto.Lambda.OpenApi.Merge;
+using Oproto.Lambda.OpenApi.Merge.Lambda.Models;
+using Oproto.Lambda.OpenApi.Merge.Lambda.Services;
+
+namespace Oproto.Lambda.OpenApi.Merge.Lambda.Tests;
+
+///
+/// Unit tests for SourceDiscovery.
+///
+public class SourceDiscoveryTests
+{
+    private readonly Mock<IS3Service> _mockS3Service;
+    private readonly Mock<ILogger<SourceDiscovery>> _mockLogger;
+    private readonly SourceDiscovery _sourceDiscovery;
+
+    public SourceDiscoveryTests()
+    {
+        _mockS3Service = new Mock<IS3Service>();
+        _mockLogger = new Mock<ILogger<SourceDiscovery>>();
+        _sourceDiscovery = new SourceDiscovery(_mockS3Service.Object, _mockLogger.Object);
+    }
+
+    #region Auto-Discover Mode Tests
+
+    [Fact]
+    public async Task DiscoverSourcesAsync_AutoDiscover_FindsJsonFiles()
+    {
+        // Arrange
+        var keys = new List<string>
+        {
+            "api/config.json",
+            "api/users.json",
+            "api/orders.json",
+            "api/products.json"
+        };
+
+        _mockS3Service
+            .Setup(x => x.ListObjectsAsync("test-bucket", "api/", It.IsAny<CancellationToken>()))
+            .ReturnsAsync(keys);
+
+        var config = CreateAutoDiscoverConfig("merged.json");
+
+        // Act
+        var result = await _sourceDiscovery.DiscoverSourcesAsync("test-bucket", "api/", config);
+
+        // Assert
+        Assert.Equal(3, result.Count);
+        Assert.Contains(result, s => s.Key == "api/users.json");
+        Assert.Contains(result, s => s.Key == "api/orders.json");
+        Assert.Contains(result, s => s.Key == "api/products.json");
+    }
+
+    [Fact]
+    public async Task DiscoverSourcesAsync_AutoDiscover_ExcludesConfigJson()
+    {
+        // Arrange
+        var keys = new List<string>
+        {
+            "api/config.json",
+            "api/users.json"
+        };
+
+        _mockS3Service
+            .Setup(x => x.ListObjectsAsync("test-bucket", "api/", It.IsAny<CancellationToken>()))
+            .ReturnsAsync(keys);
+
+        var config = CreateAutoDiscoverConfig("merged.json");
+
+        // Act
+        var result = await _sourceDiscovery.DiscoverSourcesAsync("test-bucket", "api/", config);
+
+        // Assert
+        Assert.Single(result);
+        Assert.DoesNotContain(result, s => s.Key.EndsWith("config.json"));
+    }
+
+    [Fact]
+    public async Task DiscoverSourcesAsync_AutoDiscover_ExcludesOutputFile()
+    {
+        // Arrange
+        var keys = new List<string>
+        {
+            "api/config.json",
+            "api/users.json",
+            "api/merged.json" // This is the output file
+        };
+
+        _mockS3Service
+            .Setup(x => x.ListObjectsAsync("test-bucket", "api/", It.IsAny<CancellationToken>()))
.ReturnsAsync(keys); + + var config = CreateAutoDiscoverConfig("merged.json"); + + // Act + var result = await _sourceDiscovery.DiscoverSourcesAsync("test-bucket", "api/", config); + + // Assert + Assert.Single(result); + Assert.DoesNotContain(result, s => s.Key == "api/merged.json"); + } + + [Fact] + public async Task DiscoverSourcesAsync_AutoDiscover_AppliesExcludePatterns() + { + // Arrange + var keys = new List + { + "api/config.json", + "api/users.json", + "api/users-draft.json", // Should be excluded by pattern + "api/orders.json", + "api/orders-draft.json" // Should be excluded by pattern + }; + + _mockS3Service + .Setup(x => x.ListObjectsAsync("test-bucket", "api/", It.IsAny())) + .ReturnsAsync(keys); + + var config = CreateAutoDiscoverConfig("merged.json", new List { "*-draft.json" }); + + // Act + var result = await _sourceDiscovery.DiscoverSourcesAsync("test-bucket", "api/", config); + + // Assert + Assert.Equal(2, result.Count); + Assert.Contains(result, s => s.Key == "api/users.json"); + Assert.Contains(result, s => s.Key == "api/orders.json"); + Assert.DoesNotContain(result, s => s.Key.Contains("-draft.json")); + } + + [Fact] + public async Task DiscoverSourcesAsync_AutoDiscover_ExcludesNonJsonFiles() + { + // Arrange + var keys = new List + { + "api/config.json", + "api/users.json", + "api/readme.md", + "api/config.yaml" + }; + + _mockS3Service + .Setup(x => x.ListObjectsAsync("test-bucket", "api/", It.IsAny())) + .ReturnsAsync(keys); + + var config = CreateAutoDiscoverConfig("merged.json"); + + // Act + var result = await _sourceDiscovery.DiscoverSourcesAsync("test-bucket", "api/", config); + + // Assert + Assert.Single(result); + Assert.All(result, s => Assert.EndsWith(".json", s.Key)); + } + + [Fact] + public async Task DiscoverSourcesAsync_AutoDiscover_ExcludesNestedDirectories() + { + // Arrange + var keys = new List + { + "api/config.json", + "api/users.json", + "api/nested/orders.json" // Should be excluded (nested) + }; + + _mockS3Service + 
.Setup(x => x.ListObjectsAsync("test-bucket", "api/", It.IsAny())) + .ReturnsAsync(keys); + + var config = CreateAutoDiscoverConfig("merged.json"); + + // Act + var result = await _sourceDiscovery.DiscoverSourcesAsync("test-bucket", "api/", config); + + // Assert + Assert.Single(result); + Assert.Equal("api/users.json", result[0].Key); + } + + + [Fact] + public async Task DiscoverSourcesAsync_AutoDiscover_SetsCorrectName() + { + // Arrange + var keys = new List + { + "api/config.json", + "api/users-service.json" + }; + + _mockS3Service + .Setup(x => x.ListObjectsAsync("test-bucket", "api/", It.IsAny())) + .ReturnsAsync(keys); + + var config = CreateAutoDiscoverConfig("merged.json"); + + // Act + var result = await _sourceDiscovery.DiscoverSourcesAsync("test-bucket", "api/", config); + + // Assert + Assert.Single(result); + Assert.Equal("users-service", result[0].Name); + Assert.Null(result[0].ExplicitConfig); + } + + [Fact] + public async Task DiscoverSourcesAsync_AutoDiscover_EmptyPrefix_Works() + { + // Arrange + var keys = new List + { + "config.json", + "users.json", + "orders.json" + }; + + _mockS3Service + .Setup(x => x.ListObjectsAsync("test-bucket", "", It.IsAny())) + .ReturnsAsync(keys); + + var config = CreateAutoDiscoverConfig("merged.json"); + + // Act + var result = await _sourceDiscovery.DiscoverSourcesAsync("test-bucket", "", config); + + // Assert + Assert.Equal(2, result.Count); + Assert.Contains(result, s => s.Key == "users.json"); + Assert.Contains(result, s => s.Key == "orders.json"); + } + + #endregion + + #region Explicit Sources Mode Tests + + [Fact] + public async Task DiscoverSourcesAsync_ExplicitSources_ReturnsConfiguredSources() + { + // Arrange + var config = CreateExplicitSourcesConfig(new List + { + new() { Path = "users.json", Name = "Users" }, + new() { Path = "orders.json", Name = "Orders" } + }); + + // Act + var result = await _sourceDiscovery.DiscoverSourcesAsync("test-bucket", "api/", config); + + // Assert + Assert.Equal(2, 
result.Count); + Assert.Equal("api/users.json", result[0].Key); + Assert.Equal("Users", result[0].Name); + Assert.Equal("api/orders.json", result[1].Key); + Assert.Equal("Orders", result[1].Name); + } + + [Fact] + public async Task DiscoverSourcesAsync_ExplicitSources_PreservesSourceConfiguration() + { + // Arrange + var sourceConfig = new SourceConfiguration + { + Path = "users.json", + Name = "Users", + PathPrefix = "/users", + OperationIdPrefix = "User" + }; + + var config = CreateExplicitSourcesConfig(new List { sourceConfig }); + + // Act + var result = await _sourceDiscovery.DiscoverSourcesAsync("test-bucket", "api/", config); + + // Assert + Assert.Single(result); + Assert.NotNull(result[0].ExplicitConfig); + Assert.Equal("/users", result[0].ExplicitConfig!.PathPrefix); + Assert.Equal("User", result[0].ExplicitConfig!.OperationIdPrefix); + } + + + [Fact] + public async Task DiscoverSourcesAsync_ExplicitSources_DerivesNameFromFilename() + { + // Arrange + var config = CreateExplicitSourcesConfig(new List + { + new() { Path = "users-service.json" } // No explicit name + }); + + // Act + var result = await _sourceDiscovery.DiscoverSourcesAsync("test-bucket", "api/", config); + + // Assert + Assert.Single(result); + Assert.Equal("users-service", result[0].Name); + } + + [Fact] + public async Task DiscoverSourcesAsync_ExplicitSources_EmptyPrefix_Works() + { + // Arrange + var config = CreateExplicitSourcesConfig(new List + { + new() { Path = "users.json", Name = "Users" } + }); + + // Act + var result = await _sourceDiscovery.DiscoverSourcesAsync("test-bucket", "", config); + + // Assert + Assert.Single(result); + Assert.Equal("users.json", result[0].Key); + } + + [Fact] + public async Task DiscoverSourcesAsync_ExplicitSources_DoesNotCallS3List() + { + // Arrange + var config = CreateExplicitSourcesConfig(new List + { + new() { Path = "users.json", Name = "Users" } + }); + + // Act + await _sourceDiscovery.DiscoverSourcesAsync("test-bucket", "api/", config); + + 
// Assert - S3 ListObjects should NOT be called for explicit sources + _mockS3Service.Verify( + x => x.ListObjectsAsync(It.IsAny(), It.IsAny(), It.IsAny()), + Times.Never); + } + + #endregion + + #region Helper Methods + + private static LambdaMergeConfig CreateAutoDiscoverConfig(string output, List? excludePatterns = null) + { + return new LambdaMergeConfig + { + AutoDiscover = true, + Output = output, + ExcludePatterns = excludePatterns ?? new List(), + Info = new MergeInfoConfiguration + { + Title = "Test API", + Version = "1.0.0" + } + }; + } + + private static LambdaMergeConfig CreateExplicitSourcesConfig(List sources) + { + return new LambdaMergeConfig + { + AutoDiscover = false, + Sources = sources, + Output = "merged.json", + Info = new MergeInfoConfiguration + { + Title = "Test API", + Version = "1.0.0" + } + }; + } + + #endregion +} diff --git a/Oproto.Lambda.OpenApi.Merge.Lambda/AssemblyInfo.cs b/Oproto.Lambda.OpenApi.Merge.Lambda/AssemblyInfo.cs new file mode 100644 index 0000000..b19515f --- /dev/null +++ b/Oproto.Lambda.OpenApi.Merge.Lambda/AssemblyInfo.cs @@ -0,0 +1,5 @@ +using Amazon.Lambda.Core; +using Amazon.Lambda.Serialization.SystemTextJson; + +// Register the Lambda JSON serializer +[assembly: LambdaSerializer(typeof(DefaultLambdaJsonSerializer))] diff --git a/Oproto.Lambda.OpenApi.Merge.Lambda/Exceptions.cs b/Oproto.Lambda.OpenApi.Merge.Lambda/Exceptions.cs new file mode 100644 index 0000000..b21ad94 --- /dev/null +++ b/Oproto.Lambda.OpenApi.Merge.Lambda/Exceptions.cs @@ -0,0 +1,164 @@ +namespace Oproto.Lambda.OpenApi.Merge.Lambda; + +/// +/// Exception thrown when the configuration file is not found in S3. +/// +public class ConfigNotFoundException : Exception +{ + /// + /// The S3 bucket where the config was expected. + /// + public string Bucket { get; } + + /// + /// The S3 key where the config was expected. + /// + public string Key { get; } + + /// + /// Creates a new ConfigNotFoundException. + /// + /// The S3 bucket name. 
+ /// The S3 object key. + public ConfigNotFoundException(string bucket, string key) + : base($"Configuration file not found at s3://{bucket}/{key}") + { + Bucket = bucket; + Key = key; + } +} + +/// +/// Exception thrown when the configuration file contains invalid JSON or is missing required fields. +/// +public class InvalidConfigException : Exception +{ + /// + /// The S3 bucket containing the invalid config. + /// + public string Bucket { get; } + + /// + /// The S3 key of the invalid config. + /// + public string Key { get; } + + /// + /// Creates a new InvalidConfigException. + /// + /// The S3 bucket name. + /// The S3 object key. + /// The error message describing what is invalid. + public InvalidConfigException(string bucket, string key, string message) + : base($"Invalid configuration at s3://{bucket}/{key}: {message}") + { + Bucket = bucket; + Key = key; + } + + /// + /// Creates a new InvalidConfigException with an inner exception. + /// + /// The S3 bucket name. + /// The S3 object key. + /// The error message describing what is invalid. + /// The inner exception. + public InvalidConfigException(string bucket, string key, string message, Exception innerException) + : base($"Invalid configuration at s3://{bucket}/{key}: {message}", innerException) + { + Bucket = bucket; + Key = key; + } +} + +/// +/// Exception thrown when no valid source files are found for merging. +/// +public class NoValidSourcesException : Exception +{ + /// + /// The S3 bucket that was searched. + /// + public string Bucket { get; } + + /// + /// The prefix that was searched. + /// + public string Prefix { get; } + + /// + /// Creates a new NoValidSourcesException. + /// + /// The S3 bucket name. + /// The prefix that was searched. 
+ public NoValidSourcesException(string bucket, string prefix) + : base($"No valid source files found in s3://{bucket}/{prefix}") + { + Bucket = bucket; + Prefix = prefix; + } +} + +/// +/// Exception thrown when a merge conflict occurs that cannot be resolved. +/// +public class MergeConflictException : Exception +{ + /// + /// The type of conflict that occurred. + /// + public string ConflictType { get; } + + /// + /// Details about the conflict. + /// + public string Details { get; } + + /// + /// Creates a new MergeConflictException. + /// + /// The type of conflict (e.g., "Schema", "Path"). + /// Details about the conflict. + public MergeConflictException(string conflictType, string details) + : base($"Merge conflict ({conflictType}): {details}") + { + ConflictType = conflictType; + Details = details; + } +} + +/// +/// Exception thrown when an S3 operation fails. +/// +public class S3OperationException : Exception +{ + /// + /// The S3 bucket involved in the operation. + /// + public string Bucket { get; } + + /// + /// The S3 key involved in the operation. + /// + public string Key { get; } + + /// + /// The operation that failed. + /// + public string Operation { get; } + + /// + /// Creates a new S3OperationException. + /// + /// The operation that failed (e.g., "Read", "Write"). + /// The S3 bucket name. + /// The S3 object key. + /// The inner exception. 
+ public S3OperationException(string operation, string bucket, string key, Exception innerException) + : base($"S3 {operation} operation failed for s3://{bucket}/{key}: {innerException.Message}", innerException) + { + Operation = operation; + Bucket = bucket; + Key = key; + } +} diff --git a/Oproto.Lambda.OpenApi.Merge.Lambda/Functions/.gitkeep b/Oproto.Lambda.OpenApi.Merge.Lambda/Functions/.gitkeep new file mode 100644 index 0000000..35bafc8 --- /dev/null +++ b/Oproto.Lambda.OpenApi.Merge.Lambda/Functions/.gitkeep @@ -0,0 +1 @@ +# Placeholder for Functions folder diff --git a/Oproto.Lambda.OpenApi.Merge.Lambda/Functions/MergeFunction.cs b/Oproto.Lambda.OpenApi.Merge.Lambda/Functions/MergeFunction.cs new file mode 100644 index 0000000..6fafdd4 --- /dev/null +++ b/Oproto.Lambda.OpenApi.Merge.Lambda/Functions/MergeFunction.cs @@ -0,0 +1,392 @@ +namespace Oproto.Lambda.OpenApi.Merge.Lambda.Functions; + +using System.Diagnostics; +using System.Text.Json; +using Amazon.Lambda.Annotations; +using Amazon.Lambda.Core; +using Microsoft.Extensions.Logging; +using Microsoft.OpenApi; +using Microsoft.OpenApi.Extensions; +using Microsoft.OpenApi.Models; +using Microsoft.OpenApi.Readers; +using Oproto.Lambda.OpenApi.Merge.Lambda.Models; +using Oproto.Lambda.OpenApi.Merge.Lambda.Services; + +/// +/// Lambda function for merging OpenAPI specifications from S3. +/// +public class MergeFunction +{ + private readonly IS3Service _s3Service; + private readonly IConfigLoader _configLoader; + private readonly ISourceDiscovery _sourceDiscovery; + private readonly IConditionalWriter _conditionalWriter; + private readonly IMetricsService _metricsService; + private readonly ILogger _logger; + + /// + /// Initializes a new instance of the MergeFunction. + /// + /// The S3 service for reading/writing files. + /// The config loader for loading merge configuration. + /// The source discovery service. + /// The conditional writer for output. + /// The metrics service for CloudWatch. 
+    /// The logger.
+    public MergeFunction(
+        IS3Service s3Service,
+        IConfigLoader configLoader,
+        ISourceDiscovery sourceDiscovery,
+        IConditionalWriter conditionalWriter,
+        IMetricsService metricsService,
+        ILogger<MergeFunction> logger)
+    {
+        _s3Service = s3Service ?? throw new ArgumentNullException(nameof(s3Service));
+        _configLoader = configLoader ?? throw new ArgumentNullException(nameof(configLoader));
+        _sourceDiscovery = sourceDiscovery ?? throw new ArgumentNullException(nameof(sourceDiscovery));
+        _conditionalWriter = conditionalWriter ?? throw new ArgumentNullException(nameof(conditionalWriter));
+        _metricsService = metricsService ?? throw new ArgumentNullException(nameof(metricsService));
+        _logger = logger ?? throw new ArgumentNullException(nameof(logger));
+    }
+
+    ///
+    /// Handles the merge request from Step Functions.
+    ///
+    /// The merge request containing bucket and prefix information.
+    /// The Lambda context.
+    /// The merge response with metrics and status.
+    [LambdaFunction(ResourceName = "Merge")]
+    public async Task<MergeResponse> Merge(
+        MergeRequest request,
+        ILambdaContext context)
+    {
+        var stopwatch = Stopwatch.StartNew();
+        var warnings = new List<string>();
+
+        _logger.LogInformation(
+            "Starting merge operation for s3://{Bucket}/{Prefix}",
+            request.InputBucket,
+            request.Prefix);
+
+        try
+        {
+            // 1. Load configuration
+            var config = await _configLoader.LoadConfigAsync(
+                request.InputBucket,
+                request.Prefix);
+
+            // Determine output bucket (use request override, then config, then input bucket)
+            var outputBucket = request.OutputBucket ?? config.OutputBucket ?? request.InputBucket;
+
+            // 2. Discover source files
+            var sources = await _sourceDiscovery.DiscoverSourcesAsync(
+                request.InputBucket,
+                request.Prefix,
+                config);
+
+            if (sources.Count == 0)
+            {
+                return CreateErrorResponse(
+                    "No valid source files found",
+                    stopwatch.ElapsedMilliseconds);
+            }
+
+            _logger.LogInformation("Discovered {Count} source files", sources.Count);
+
+            // 3.
Load and parse source documents + var documents = new List<(SourceConfiguration Source, OpenApiDocument Document)>(); + foreach (var source in sources) + { + var document = await LoadOpenApiDocumentAsync( + request.InputBucket, + source.Key, + warnings); + + if (document != null) + { + // Create source configuration for the merger + var sourceConfig = source.ExplicitConfig ?? new SourceConfiguration + { + Path = source.Key, + Name = source.Name + }; + + documents.Add((sourceConfig, document)); + } + } + + if (documents.Count == 0) + { + return CreateErrorResponse( + "No valid OpenAPI documents could be loaded", + stopwatch.ElapsedMilliseconds, + warnings); + } + + _logger.LogInformation("Loaded {Count} valid OpenAPI documents", documents.Count); + + // 4. Perform merge + var merger = new OpenApiMerger(); + var mergeResult = merger.Merge(config, documents); + + // Add merge warnings to our warnings list + foreach (var warning in mergeResult.Warnings) + { + warnings.Add(warning.ToString()); + } + + if (!mergeResult.Success) + { + return CreateErrorResponse( + "Merge operation failed", + stopwatch.ElapsedMilliseconds, + warnings); + } + + // 5. Serialize merged document + var mergedJson = SerializeOpenApiDocument(mergeResult.Document); + + // 6. Build output key + var outputKey = BuildOutputKey(request.Prefix, config.Output); + + // 7. Write output conditionally (only if changed) + var writeResult = await _conditionalWriter.WriteIfChangedAsync( + outputBucket, + outputKey, + mergedJson); + + stopwatch.Stop(); + + // 8. Build metrics + var metrics = new MergeMetrics( + SourceFilesProcessed: documents.Count, + SchemasMergedCount: mergeResult.Document.Components?.Schemas?.Count ?? 0, + PathsMergedCount: mergeResult.Document.Paths?.Count ?? 0, + DurationMs: stopwatch.ElapsedMilliseconds, + OutputWritten: writeResult.WasWritten, + OutputKey: writeResult.OutputKey); + + var message = writeResult.WasWritten + ? $"Merge completed successfully. 
Output written to s3://{outputBucket}/{outputKey}" + : $"Merge completed successfully. Output unchanged at s3://{outputBucket}/{outputKey}"; + + _logger.LogInformation( + "Merge completed: {SourceCount} sources, {SchemaCount} schemas, {PathCount} paths, {Duration}ms, Written={Written}", + metrics.SourceFilesProcessed, + metrics.SchemasMergedCount, + metrics.PathsMergedCount, + metrics.DurationMs, + metrics.OutputWritten); + + // Emit success metrics to CloudWatch + await _metricsService.EmitSuccessMetricsAsync( + request.Prefix, + metrics.DurationMs, + metrics.SourceFilesProcessed, + metrics.OutputWritten); + + return new MergeResponse( + Success: true, + Message: message, + Metrics: metrics, + Warnings: warnings.Count > 0 ? warnings : null); + } + catch (ConfigNotFoundException ex) + { + _logger.LogError(ex, "Configuration not found"); + await _metricsService.EmitFailureMetricsAsync( + request.Prefix, + stopwatch.ElapsedMilliseconds, + "ConfigNotFound"); + return CreateErrorResponse( + ex.Message, + stopwatch.ElapsedMilliseconds); + } + catch (InvalidConfigException ex) + { + _logger.LogError(ex, "Invalid configuration"); + await _metricsService.EmitFailureMetricsAsync( + request.Prefix, + stopwatch.ElapsedMilliseconds, + "InvalidConfig"); + return CreateErrorResponse( + ex.Message, + stopwatch.ElapsedMilliseconds); + } + catch (NoValidSourcesException ex) + { + _logger.LogError(ex, "No valid sources found"); + await _metricsService.EmitFailureMetricsAsync( + request.Prefix, + stopwatch.ElapsedMilliseconds, + "NoValidSources"); + return CreateErrorResponse( + ex.Message, + stopwatch.ElapsedMilliseconds, + warnings); + } + catch (MergeConflictException ex) + { + _logger.LogError(ex, "Merge conflict occurred"); + await _metricsService.EmitFailureMetricsAsync( + request.Prefix, + stopwatch.ElapsedMilliseconds, + "MergeConflict"); + return CreateErrorResponse( + ex.Message, + stopwatch.ElapsedMilliseconds, + warnings); + } + catch (S3OperationException ex) + { + 
_logger.LogError(ex, "S3 operation failed"); + await _metricsService.EmitFailureMetricsAsync( + request.Prefix, + stopwatch.ElapsedMilliseconds, + "S3Error"); + return CreateErrorResponse( + ex.Message, + stopwatch.ElapsedMilliseconds, + warnings); + } + catch (Amazon.S3.AmazonS3Exception ex) + { + _logger.LogError(ex, "S3 error during merge operation"); + await _metricsService.EmitFailureMetricsAsync( + request.Prefix, + stopwatch.ElapsedMilliseconds, + $"S3_{ex.ErrorCode}"); + var errorMessage = ex.ErrorCode switch + { + "AccessDenied" => $"Access denied to S3 resource: {ex.Message}", + "NoSuchBucket" => $"Bucket not found: {ex.Message}", + "NoSuchKey" => $"Object not found: {ex.Message}", + _ => $"S3 error: {ex.Message}" + }; + return CreateErrorResponse( + errorMessage, + stopwatch.ElapsedMilliseconds, + warnings); + } + catch (Exception ex) + { + _logger.LogError(ex, "Unexpected error during merge operation"); + await _metricsService.EmitFailureMetricsAsync( + request.Prefix, + stopwatch.ElapsedMilliseconds, + "UnexpectedError"); + return CreateErrorResponse( + $"Unexpected error: {ex.Message}", + stopwatch.ElapsedMilliseconds, + warnings); + } + } + + /// + /// Loads and parses an OpenAPI document from S3. 
+    ///
+    private async Task<OpenApiDocument?> LoadOpenApiDocumentAsync(
+        string bucket,
+        string key,
+        List<string> warnings)
+    {
+        _logger.LogDebug("Loading OpenAPI document from s3://{Bucket}/{Key}", bucket, key);
+
+        try
+        {
+            var content = await _s3Service.ReadTextAsync(bucket, key);
+
+            if (content == null)
+            {
+                var warning = $"Source file not found: s3://{bucket}/{key}";
+                _logger.LogWarning(warning);
+                warnings.Add(warning);
+                return null;
+            }
+
+            var reader = new OpenApiStringReader();
+            var document = reader.Read(content, out var diagnostic);
+
+            if (diagnostic.Errors.Count > 0)
+            {
+                var errorMessages = string.Join("; ", diagnostic.Errors.Select(e => e.Message));
+                var warning = $"Invalid OpenAPI document at s3://{bucket}/{key}: {errorMessages}";
+                _logger.LogWarning(warning);
+                warnings.Add(warning);
+                return null;
+            }
+
+            return document;
+        }
+        catch (Exception ex)
+        {
+            var warning = $"Failed to load source file s3://{bucket}/{key}: {ex.Message}";
+            _logger.LogWarning(ex, "Failed to load source file s3://{Bucket}/{Key}", bucket, key);
+            warnings.Add(warning);
+            return null;
+        }
+    }
+
+    ///
+    /// Serializes an OpenAPI document to JSON.
+    ///
+    private static string SerializeOpenApiDocument(OpenApiDocument document)
+    {
+        return document.SerializeAsJson(OpenApiSpecVersion.OpenApi3_0);
+    }
+
+    ///
+    /// Builds the full S3 key for the output file.
+    /// If output starts with '/' or contains '/', it's treated as an absolute/full path.
+    /// Otherwise, it's relative to the prefix.
+    ///
+    private static string BuildOutputKey(string prefix, string output)
+    {
+        // If output starts with '/', treat it as absolute (remove leading slash for S3)
+        if (output.StartsWith("/"))
+        {
+            return output.TrimStart('/');
+        }
+
+        // If output contains '/', treat it as a full path (not relative to prefix)
+        if (output.Contains("/"))
+        {
+            return output;
+        }
+
+        // Otherwise, it's a simple filename relative to the prefix
+        if (string.IsNullOrEmpty(prefix))
+        {
+            return output;
+        }
+
+        // Ensure prefix ends with /
+        if (!prefix.EndsWith("/"))
+        {
+            prefix += "/";
+        }
+
+        return prefix + output;
+    }
+
+    ///
+    /// Creates an error response with the given message.
+    ///
+    private static MergeResponse CreateErrorResponse(
+        string error,
+        long durationMs,
+        List<string>? warnings = null)
+    {
+        return new MergeResponse(
+            Success: false,
+            Message: "Merge operation failed",
+            Metrics: new MergeMetrics(
+                SourceFilesProcessed: 0,
+                SchemasMergedCount: 0,
+                PathsMergedCount: 0,
+                DurationMs: durationMs,
+                OutputWritten: false),
+            Warnings: warnings?.Count > 0 ? warnings : null,
+            Error: error);
+    }
+} diff --git a/Oproto.Lambda.OpenApi.Merge.Lambda/Models/LambdaMergeConfig.cs b/Oproto.Lambda.OpenApi.Merge.Lambda/Models/LambdaMergeConfig.cs new file mode 100644 index 0000000..48d7e9a --- /dev/null +++ b/Oproto.Lambda.OpenApi.Merge.Lambda/Models/LambdaMergeConfig.cs @@ -0,0 +1,17 @@
+namespace Oproto.Lambda.OpenApi.Merge.Lambda.Models;
+
+using System.Text.Json.Serialization;
+using Oproto.Lambda.OpenApi.Merge;
+
+/// <summary>
+/// Lambda-specific merge configuration that extends the base MergeConfiguration.
+/// </summary>
+public class LambdaMergeConfig : MergeConfiguration
+{
+    /// <summary>
+    /// Output bucket name. If not specified, uses input bucket.
+    /// Only applicable in Lambda context.
+    /// </summary>
+    [JsonPropertyName("outputBucket")]
+    public string?
OutputBucket { get; set; }
+} diff --git a/Oproto.Lambda.OpenApi.Merge.Lambda/Models/MergeMetrics.cs b/Oproto.Lambda.OpenApi.Merge.Lambda/Models/MergeMetrics.cs new file mode 100644 index 0000000..f6cb7c1 --- /dev/null +++ b/Oproto.Lambda.OpenApi.Merge.Lambda/Models/MergeMetrics.cs @@ -0,0 +1,20 @@
+namespace Oproto.Lambda.OpenApi.Merge.Lambda.Models;
+
+using System.Text.Json.Serialization;
+
+/// <summary>
+/// Metrics from a merge operation.
+/// </summary>
+/// <param name="SourceFilesProcessed">Number of source files that were processed.</param>
+/// <param name="SchemasMergedCount">Number of schemas merged.</param>
+/// <param name="PathsMergedCount">Number of paths merged.</param>
+/// <param name="DurationMs">Duration of the merge operation in milliseconds.</param>
+/// <param name="OutputWritten">Whether the output file was written (false if unchanged).</param>
+/// <param name="OutputKey">The S3 key of the output file, if written.</param>
+public record MergeMetrics(
+    [property: JsonPropertyName("sourceFilesProcessed")] int SourceFilesProcessed,
+    [property: JsonPropertyName("schemasMergedCount")] int SchemasMergedCount,
+    [property: JsonPropertyName("pathsMergedCount")] int PathsMergedCount,
+    [property: JsonPropertyName("durationMs")] long DurationMs,
+    [property: JsonPropertyName("outputWritten")] bool OutputWritten,
+    [property: JsonPropertyName("outputKey")] string? OutputKey = null); diff --git a/Oproto.Lambda.OpenApi.Merge.Lambda/Models/MergeRequest.cs b/Oproto.Lambda.OpenApi.Merge.Lambda/Models/MergeRequest.cs new file mode 100644 index 0000000..0243c4f --- /dev/null +++ b/Oproto.Lambda.OpenApi.Merge.Lambda/Models/MergeRequest.cs @@ -0,0 +1,14 @@
+namespace Oproto.Lambda.OpenApi.Merge.Lambda.Models;
+
+using System.Text.Json.Serialization;
+
+/// <summary>
+/// Request model for the merge Lambda function.
+/// </summary>
+/// <param name="InputBucket">The S3 bucket containing input files.</param>
+/// <param name="Prefix">The API prefix within the bucket.</param>
+/// <param name="OutputBucket">Optional output bucket. Defaults to InputBucket if not specified.</param>
+public record MergeRequest(
+    [property: JsonPropertyName("inputBucket")] string InputBucket,
+    [property: JsonPropertyName("prefix")] string Prefix,
+    [property: JsonPropertyName("outputBucket")] string?
OutputBucket = null); diff --git a/Oproto.Lambda.OpenApi.Merge.Lambda/Models/MergeResponse.cs b/Oproto.Lambda.OpenApi.Merge.Lambda/Models/MergeResponse.cs new file mode 100644 index 0000000..3799b95 --- /dev/null +++ b/Oproto.Lambda.OpenApi.Merge.Lambda/Models/MergeResponse.cs @@ -0,0 +1,19 @@ +namespace Oproto.Lambda.OpenApi.Merge.Lambda.Models; + +using System.Collections.Generic; +using System.Text.Json.Serialization; + +/// +/// Response model from the merge Lambda function. +/// +/// Whether the merge operation succeeded. +/// A human-readable message describing the result. +/// Metrics from the merge operation. +/// Optional list of warnings encountered during merge. +/// Error details if the merge failed. +public record MergeResponse( + [property: JsonPropertyName("success")] bool Success, + [property: JsonPropertyName("message")] string Message, + [property: JsonPropertyName("metrics")] MergeMetrics Metrics, + [property: JsonPropertyName("warnings")] IReadOnlyList? Warnings = null, + [property: JsonPropertyName("error")] string? 
Error = null); diff --git a/Oproto.Lambda.OpenApi.Merge.Lambda/Oproto.Lambda.OpenApi.Merge.Lambda.csproj b/Oproto.Lambda.OpenApi.Merge.Lambda/Oproto.Lambda.OpenApi.Merge.Lambda.csproj new file mode 100644 index 0000000..14169dc --- /dev/null +++ b/Oproto.Lambda.OpenApi.Merge.Lambda/Oproto.Lambda.OpenApi.Merge.Lambda.csproj @@ -0,0 +1,42 @@ + + + + net8.0 + enable + enable + 12 + true + Lambda + false + + + true + $(NoWarn);CS1591 + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff --git a/Oproto.Lambda.OpenApi.Merge.Lambda/Services/.gitkeep b/Oproto.Lambda.OpenApi.Merge.Lambda/Services/.gitkeep new file mode 100644 index 0000000..db27dd0 --- /dev/null +++ b/Oproto.Lambda.OpenApi.Merge.Lambda/Services/.gitkeep @@ -0,0 +1 @@ +# Placeholder for Services folder diff --git a/Oproto.Lambda.OpenApi.Merge.Lambda/Services/ConditionalWriter.cs b/Oproto.Lambda.OpenApi.Merge.Lambda/Services/ConditionalWriter.cs new file mode 100644 index 0000000..1cb77dd --- /dev/null +++ b/Oproto.Lambda.OpenApi.Merge.Lambda/Services/ConditionalWriter.cs @@ -0,0 +1,87 @@ +namespace Oproto.Lambda.OpenApi.Merge.Lambda.Services; + +using Microsoft.Extensions.Logging; + +/// +/// Service for conditional write operations that skip writes when content is unchanged. +/// Implements semantic comparison to ignore formatting differences. +/// +public class ConditionalWriter : IConditionalWriter +{ + private readonly IS3Service _s3Service; + private readonly IOutputComparer _outputComparer; + private readonly ILogger _logger; + + /// + /// Initializes a new instance of the ConditionalWriter. + /// + /// The S3 service. + /// The output comparer. + /// The logger. + public ConditionalWriter( + IS3Service s3Service, + IOutputComparer outputComparer, + ILogger logger) + { + _s3Service = s3Service ?? throw new ArgumentNullException(nameof(s3Service)); + _outputComparer = outputComparer ?? throw new ArgumentNullException(nameof(outputComparer)); + _logger = logger ?? 
throw new ArgumentNullException(nameof(logger));
+    }
+
+    /// <inheritdoc/>
+    public async Task<ConditionalWriteResult> WriteIfChangedAsync(
+        string bucket,
+        string key,
+        string newContent,
+        CancellationToken ct = default)
+    {
+        if (string.IsNullOrEmpty(bucket))
+            throw new ArgumentNullException(nameof(bucket));
+        if (string.IsNullOrEmpty(key))
+            throw new ArgumentNullException(nameof(key));
+        if (newContent == null)
+            throw new ArgumentNullException(nameof(newContent));
+
+        _logger.LogDebug("Checking if write is needed for s3://{Bucket}/{Key}", bucket, key);
+
+        // Try to read existing content
+        string? existingContent = null;
+        try
+        {
+            existingContent = await _s3Service.ReadTextAsync(bucket, key, ct);
+        }
+        catch (Exception ex)
+        {
+            _logger.LogWarning(ex, "Could not read existing content from s3://{Bucket}/{Key}, will write new content", bucket, key);
+        }
+
+        // If no existing content, always write
+        if (existingContent == null)
+        {
+            _logger.LogDebug("No existing content found, writing new content to s3://{Bucket}/{Key}", bucket, key);
+            await _s3Service.WriteTextAsync(bucket, key, newContent, "application/json", ct);
+            return new ConditionalWriteResult(
+                WasWritten: true,
+                OutputKey: key,
+                Reason: "File did not exist");
+        }
+
+        // Compare content semantically
+        if (_outputComparer.AreEquivalent(existingContent, newContent))
+        {
+            _logger.LogInformation("Content unchanged, skipping write to s3://{Bucket}/{Key}", bucket, key);
+            return new ConditionalWriteResult(
+                WasWritten: false,
+                OutputKey: key,
+                Reason: "Content unchanged");
+        }
+
+        // Content differs, write new content
+        _logger.LogDebug("Content changed, writing to s3://{Bucket}/{Key}", bucket, key);
+        await _s3Service.WriteTextAsync(bucket, key, newContent, "application/json", ct);
+        return new ConditionalWriteResult(
+            WasWritten: true,
+            OutputKey: key,
+            Reason: "Content changed");
+    }
+} diff --git a/Oproto.Lambda.OpenApi.Merge.Lambda/Services/ConfigLoader.cs b/Oproto.Lambda.OpenApi.Merge.Lambda/Services/ConfigLoader.cs new
file mode 100644 index 0000000..d0f9967 --- /dev/null +++ b/Oproto.Lambda.OpenApi.Merge.Lambda/Services/ConfigLoader.cs @@ -0,0 +1,151 @@ +namespace Oproto.Lambda.OpenApi.Merge.Lambda.Services; + +using System.Text.Json; +using Microsoft.Extensions.Logging; +using Oproto.Lambda.OpenApi.Merge.Lambda.Models; + +/// +/// Loads and validates merge configuration from S3. +/// +public class ConfigLoader : IConfigLoader +{ + private const string ConfigFileName = "config.json"; + + private readonly IS3Service _s3Service; + private readonly ILogger<ConfigLoader> _logger; + + /// + /// Initializes a new instance of the ConfigLoader. + /// + /// The S3 service for reading files. + /// The logger. + public ConfigLoader(IS3Service s3Service, ILogger<ConfigLoader> logger) + { + _s3Service = s3Service ?? throw new ArgumentNullException(nameof(s3Service)); + _logger = logger ?? throw new ArgumentNullException(nameof(logger)); + } + + /// + public async Task<LambdaMergeConfig> LoadConfigAsync(string bucket, string prefix, CancellationToken ct = default) + { + var configKey = GetConfigKey(prefix); + _logger.LogInformation("Loading configuration from s3://{Bucket}/{Key}", bucket, configKey); + + // Read the config file content + string? content; + try + { + content = await _s3Service.ReadTextAsync(bucket, configKey, ct); + } + catch (Exception ex) when (ex is not OperationCanceledException) + { + _logger.LogError(ex, "Failed to read configuration from s3://{Bucket}/{Key}", bucket, configKey); + throw; + } + + // Check if config exists + if (content == null) + { + _logger.LogError("Configuration file not found at s3://{Bucket}/{Key}", bucket, configKey); + throw new ConfigNotFoundException(bucket, configKey); + } + + // Parse the JSON + LambdaMergeConfig config; + try + { + config = JsonSerializer.Deserialize<LambdaMergeConfig>(content) + ??
throw new InvalidConfigException(bucket, configKey, "Deserialization returned null"); + } + catch (JsonException ex) + { + _logger.LogError(ex, "Invalid JSON in configuration file s3://{Bucket}/{Key}", bucket, configKey); + throw new InvalidConfigException(bucket, configKey, $"Invalid JSON: {ex.Message}", ex); + } + + // Validate required fields + ValidateConfig(bucket, configKey, config); + + _logger.LogInformation( + "Successfully loaded configuration: Title={Title}, Version={Version}, AutoDiscover={AutoDiscover}", + config.Info.Title, + config.Info.Version, + config.AutoDiscover); + + return config; + } + + /// + public string ExtractPrefix(string key) + { + if (string.IsNullOrEmpty(key)) + { + return string.Empty; + } + + // Find the last slash to separate the filename from the path + var lastSlashIndex = key.LastIndexOf('/'); + + if (lastSlashIndex < 0) + { + // No slash found - file is at root level, no prefix + return string.Empty; + } + + // Return everything up to and including the last slash + return key.Substring(0, lastSlashIndex + 1); + } + + /// + /// Gets the config file key for a given prefix. + /// + private static string GetConfigKey(string prefix) + { + // Ensure prefix ends with / if not empty + if (!string.IsNullOrEmpty(prefix) && !prefix.EndsWith("/")) + { + prefix += "/"; + } + + return $"{prefix}{ConfigFileName}"; + } + + /// + /// Validates the configuration has all required fields. 
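The `ExtractPrefix` and `GetConfigKey` contracts above (everything up to and including the last slash of the triggering key; `config.json` appended under a normalized trailing slash) are language-agnostic. A minimal Python sketch of the same logic, purely illustrative and not part of the Lambda code:

```python
def extract_prefix(key: str) -> str:
    """Return everything up to and including the last '/', or '' for root-level keys."""
    if not key:
        return ""
    last_slash = key.rfind("/")
    return "" if last_slash < 0 else key[: last_slash + 1]

def config_key(prefix: str) -> str:
    """Build the config.json key for a prefix, normalizing the trailing slash."""
    if prefix and not prefix.endswith("/"):
        prefix += "/"
    return prefix + "config.json"
```

So an event for `publicapi/config.json` resolves to the prefix `publicapi/`, and a root-level key yields an empty prefix.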
+ /// + private void ValidateConfig(string bucket, string key, LambdaMergeConfig config) + { + var errors = new List<string>(); + + // Info.Title is required + if (string.IsNullOrWhiteSpace(config.Info.Title)) + { + errors.Add("Missing required field: info.title"); + } + + // Info.Version is required + if (string.IsNullOrWhiteSpace(config.Info.Version)) + { + errors.Add("Missing required field: info.version"); + } + + // Output is required + if (string.IsNullOrWhiteSpace(config.Output)) + { + errors.Add("Missing required field: output"); + } + + // If autoDiscover is false, sources must be specified + if (!config.AutoDiscover && (config.Sources == null || config.Sources.Count == 0)) + { + errors.Add("No sources specified and autoDiscover is disabled"); + } + + if (errors.Count > 0) + { + var errorMessage = string.Join("; ", errors); + _logger.LogError("Configuration validation failed for s3://{Bucket}/{Key}: {Errors}", bucket, key, errorMessage); + throw new InvalidConfigException(bucket, key, errorMessage); + } + } +} diff --git a/Oproto.Lambda.OpenApi.Merge.Lambda/Services/IConditionalWriter.cs b/Oproto.Lambda.OpenApi.Merge.Lambda/Services/IConditionalWriter.cs new file mode 100644 index 0000000..821158e --- /dev/null +++ b/Oproto.Lambda.OpenApi.Merge.Lambda/Services/IConditionalWriter.cs @@ -0,0 +1,29 @@ +namespace Oproto.Lambda.OpenApi.Merge.Lambda.Services; + +/// +/// Result of a conditional write operation. +/// +public record ConditionalWriteResult( + bool WasWritten, + string? OutputKey, + string Reason); + +/// +/// Interface for conditional write operations that skip writes when content is unchanged. +/// +public interface IConditionalWriter +{ + /// + /// Writes content to S3 only if it differs from the existing content. + /// + /// The S3 bucket name. + /// The S3 object key. + /// The new content to write. + /// Cancellation token. + /// Result indicating whether the write occurred and why.
+ Task<ConditionalWriteResult> WriteIfChangedAsync( + string bucket, + string key, + string newContent, + CancellationToken ct = default); +} diff --git a/Oproto.Lambda.OpenApi.Merge.Lambda/Services/IConfigLoader.cs b/Oproto.Lambda.OpenApi.Merge.Lambda/Services/IConfigLoader.cs new file mode 100644 index 0000000..1acb0e3 --- /dev/null +++ b/Oproto.Lambda.OpenApi.Merge.Lambda/Services/IConfigLoader.cs @@ -0,0 +1,27 @@ +namespace Oproto.Lambda.OpenApi.Merge.Lambda.Services; + +using Oproto.Lambda.OpenApi.Merge.Lambda.Models; + +/// +/// Interface for loading merge configuration from S3. +/// +public interface IConfigLoader +{ + /// + /// Loads and validates the merge configuration from S3. + /// + /// The S3 bucket name. + /// The API prefix (e.g., "publicapi/"). + /// Cancellation token. + /// The loaded and validated configuration. + /// Thrown when the config file does not exist. + /// Thrown when the config file contains invalid JSON. + Task<LambdaMergeConfig> LoadConfigAsync(string bucket, string prefix, CancellationToken ct = default); + + /// + /// Extracts the API prefix from an S3 object key. + /// + /// The S3 object key (e.g., "publicapi/config.json" or "internal/v2/service.json"). + /// The extracted prefix (e.g., "publicapi/" or "internal/v2/"). + string ExtractPrefix(string key); +} diff --git a/Oproto.Lambda.OpenApi.Merge.Lambda/Services/IMetricsService.cs b/Oproto.Lambda.OpenApi.Merge.Lambda/Services/IMetricsService.cs new file mode 100644 index 0000000..634df01 --- /dev/null +++ b/Oproto.Lambda.OpenApi.Merge.Lambda/Services/IMetricsService.cs @@ -0,0 +1,35 @@ +namespace Oproto.Lambda.OpenApi.Merge.Lambda.Services; + +/// +/// Interface for emitting CloudWatch metrics. +/// +public interface IMetricsService +{ + /// + /// Emits metrics for a successful merge operation. + /// + /// The API prefix that was merged. + /// Duration of the merge operation in milliseconds. + /// Number of source files processed. + /// Whether the output was written (vs unchanged). + /// Cancellation token.
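The write-if-changed contract above reduces to a simple three-way decision (missing, equivalent, changed). A minimal Python sketch under assumed names, with a dict standing in for S3 and a pluggable comparer standing in for `IOutputComparer`:

```python
def write_if_changed(store, key, new_content, equivalent):
    """Write only when content is missing or semantically different.

    store      -- dict standing in for the S3 bucket
    equivalent -- callable(existing, new) -> bool, the semantic comparer
    Returns (was_written, reason), mirroring ConditionalWriteResult.
    """
    existing = store.get(key)
    if existing is None:
        store[key] = new_content
        return True, "File did not exist"
    if equivalent(existing, new_content):
        return False, "Content unchanged"
    store[key] = new_content
    return True, "Content changed"
```

The design point is that idempotent re-runs produce no write, which avoids re-triggering any downstream S3 event notifications on the output object.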
+ Task EmitSuccessMetricsAsync( + string prefix, + long durationMs, + int filesProcessed, + bool outputWritten, + CancellationToken ct = default); + + /// + /// Emits metrics for a failed merge operation. + /// + /// The API prefix that was being merged. + /// Duration before failure in milliseconds. + /// The type of error that occurred. + /// Cancellation token. + Task EmitFailureMetricsAsync( + string prefix, + long durationMs, + string errorType, + CancellationToken ct = default); +} diff --git a/Oproto.Lambda.OpenApi.Merge.Lambda/Services/IOutputComparer.cs b/Oproto.Lambda.OpenApi.Merge.Lambda/Services/IOutputComparer.cs new file mode 100644 index 0000000..38544dd --- /dev/null +++ b/Oproto.Lambda.OpenApi.Merge.Lambda/Services/IOutputComparer.cs @@ -0,0 +1,22 @@ +namespace Oproto.Lambda.OpenApi.Merge.Lambda.Services; + +/// +/// Interface for comparing OpenAPI document outputs. +/// +public interface IOutputComparer +{ + /// + /// Normalizes JSON content for comparison by sorting keys and using consistent formatting. + /// + /// The JSON content to normalize. + /// Normalized JSON string. + string NormalizeJson(string json); + + /// + /// Compares two JSON strings for semantic equality, ignoring formatting differences. + /// + /// First JSON string. + /// Second JSON string. + /// True if the JSON documents are semantically equivalent, false otherwise. + bool AreEquivalent(string json1, string json2); +} diff --git a/Oproto.Lambda.OpenApi.Merge.Lambda/Services/IS3Service.cs b/Oproto.Lambda.OpenApi.Merge.Lambda/Services/IS3Service.cs new file mode 100644 index 0000000..53da5d6 --- /dev/null +++ b/Oproto.Lambda.OpenApi.Merge.Lambda/Services/IS3Service.cs @@ -0,0 +1,64 @@ +namespace Oproto.Lambda.OpenApi.Merge.Lambda.Services; + +/// +/// Interface for S3 operations used by the merge Lambda function. +/// +public interface IS3Service +{ + /// + /// Reads a JSON file from S3 and deserializes it. + /// + /// The type to deserialize to. + /// The S3 bucket name. 
+ /// The S3 object key. + /// Cancellation token. + /// The deserialized object, or null if the object doesn't exist. + Task<T?> ReadJsonAsync<T>(string bucket, string key, CancellationToken ct = default) where T : class; + + /// + /// Reads raw text content from S3. + /// + /// The S3 bucket name. + /// The S3 object key. + /// Cancellation token. + /// The text content, or null if the object doesn't exist. + Task<string?> ReadTextAsync(string bucket, string key, CancellationToken ct = default); + + /// + /// Writes JSON content to S3. + /// + /// The type to serialize. + /// The S3 bucket name. + /// The S3 object key. + /// The content to serialize and write. + /// Cancellation token. + Task WriteJsonAsync<T>(string bucket, string key, T content, CancellationToken ct = default); + + /// + /// Writes raw text content to S3. + /// + /// The S3 bucket name. + /// The S3 object key. + /// The text content to write. + /// The content type (default: application/json). + /// Cancellation token. + Task WriteTextAsync(string bucket, string key, string content, string contentType = "application/json", CancellationToken ct = default); + + /// + /// Lists objects with a given prefix. + /// + /// The S3 bucket name. + /// The prefix to filter objects by. + /// Cancellation token. + /// A list of object keys matching the prefix. + Task<IReadOnlyList<string>> ListObjectsAsync(string bucket, string prefix, CancellationToken ct = default); + + /// + /// Checks if an object exists in S3. + /// + /// The S3 bucket name. + /// The S3 object key. + /// Cancellation token. + /// True if the object exists, false otherwise.
+ Task<bool> ExistsAsync(string bucket, string key, CancellationToken ct = default); +} diff --git a/Oproto.Lambda.OpenApi.Merge.Lambda/Services/ISourceDiscovery.cs b/Oproto.Lambda.OpenApi.Merge.Lambda/Services/ISourceDiscovery.cs new file mode 100644 index 0000000..8c93042 --- /dev/null +++ b/Oproto.Lambda.OpenApi.Merge.Lambda/Services/ISourceDiscovery.cs @@ -0,0 +1,35 @@ +namespace Oproto.Lambda.OpenApi.Merge.Lambda.Services; + +using Oproto.Lambda.OpenApi.Merge; +using Oproto.Lambda.OpenApi.Merge.Lambda.Models; + +/// +/// Interface for discovering source OpenAPI specification files. +/// +public interface ISourceDiscovery +{ + /// + /// Discovers source files based on configuration. + /// + /// The S3 bucket name. + /// The API prefix (e.g., "publicapi/"). + /// The merge configuration. + /// Cancellation token. + /// A list of discovered source files. + Task<IReadOnlyList<DiscoveredSource>> DiscoverSourcesAsync( + string bucket, + string prefix, + LambdaMergeConfig config, + CancellationToken ct = default); +} + +/// +/// Represents a discovered source file for merging. +/// +/// The full S3 key of the source file. +/// The friendly name for this source (filename without extension if not specified). +/// The explicit source configuration if provided, null for auto-discovered sources. +public record DiscoveredSource( + string Key, + string Name, + SourceConfiguration? ExplicitConfig); diff --git a/Oproto.Lambda.OpenApi.Merge.Lambda/Services/MetricsService.cs b/Oproto.Lambda.OpenApi.Merge.Lambda/Services/MetricsService.cs new file mode 100644 index 0000000..4564e09 --- /dev/null +++ b/Oproto.Lambda.OpenApi.Merge.Lambda/Services/MetricsService.cs @@ -0,0 +1,176 @@ +namespace Oproto.Lambda.OpenApi.Merge.Lambda.Services; + +using Amazon.CloudWatch; +using Amazon.CloudWatch.Model; +using Microsoft.Extensions.Logging; + +/// +/// Service for emitting CloudWatch metrics for merge operations.
+/// +public class MetricsService : IMetricsService +{ + private const string Namespace = "Oproto/OpenApiMerge"; + private const string DimensionName = "ApiPrefix"; + + private readonly IAmazonCloudWatch _cloudWatch; + private readonly ILogger<MetricsService> _logger; + + /// + /// Initializes a new instance of the MetricsService. + /// + /// The CloudWatch client. + /// The logger. + public MetricsService(IAmazonCloudWatch cloudWatch, ILogger<MetricsService> logger) + { + _cloudWatch = cloudWatch ?? throw new ArgumentNullException(nameof(cloudWatch)); + _logger = logger ?? throw new ArgumentNullException(nameof(logger)); + } + + /// + public async Task EmitSuccessMetricsAsync( + string prefix, + long durationMs, + int filesProcessed, + bool outputWritten, + CancellationToken ct = default) + { + var timestamp = DateTime.UtcNow; + var dimension = new Dimension + { + Name = DimensionName, + Value = NormalizePrefix(prefix) + }; + + var metricData = new List<MetricDatum> + { + // Merge duration + new MetricDatum + { + MetricName = "MergeDuration", + Value = durationMs, + Unit = StandardUnit.Milliseconds, + TimestampUtc = timestamp, + Dimensions = new List<Dimension> { dimension } + }, + // Success count + new MetricDatum + { + MetricName = "MergeSuccess", + Value = 1, + Unit = StandardUnit.Count, + TimestampUtc = timestamp, + Dimensions = new List<Dimension> { dimension } + }, + // Files processed + new MetricDatum + { + MetricName = "FilesProcessed", + Value = filesProcessed, + Unit = StandardUnit.Count, + TimestampUtc = timestamp, + Dimensions = new List<Dimension> { dimension } + }, + // Output written (1 if written, 0 if unchanged) + new MetricDatum + { + MetricName = "OutputWritten", + Value = outputWritten ?
1 : 0, + Unit = StandardUnit.Count, + TimestampUtc = timestamp, + Dimensions = new List<Dimension> { dimension } + } + }; + + await PutMetricsAsync(metricData, ct); + } + + /// + public async Task EmitFailureMetricsAsync( + string prefix, + long durationMs, + string errorType, + CancellationToken ct = default) + { + var timestamp = DateTime.UtcNow; + var prefixDimension = new Dimension + { + Name = DimensionName, + Value = NormalizePrefix(prefix) + }; + var errorDimension = new Dimension + { + Name = "ErrorType", + Value = errorType + }; + + var metricData = new List<MetricDatum> + { + // Merge duration (even for failures) + new MetricDatum + { + MetricName = "MergeDuration", + Value = durationMs, + Unit = StandardUnit.Milliseconds, + TimestampUtc = timestamp, + Dimensions = new List<Dimension> { prefixDimension } + }, + // Failure count + new MetricDatum + { + MetricName = "MergeFailure", + Value = 1, + Unit = StandardUnit.Count, + TimestampUtc = timestamp, + Dimensions = new List<Dimension> { prefixDimension } + }, + // Failure count by error type + new MetricDatum + { + MetricName = "MergeFailure", + Value = 1, + Unit = StandardUnit.Count, + TimestampUtc = timestamp, + Dimensions = new List<Dimension> { prefixDimension, errorDimension } + } + }; + + await PutMetricsAsync(metricData, ct); + } + + /// + /// Puts metrics to CloudWatch. + /// + private async Task PutMetricsAsync(List<MetricDatum> metricData, CancellationToken ct) + { + try + { + var request = new PutMetricDataRequest + { + Namespace = Namespace, + MetricData = metricData + }; + + await _cloudWatch.PutMetricDataAsync(request, ct); + _logger.LogDebug("Emitted {Count} metrics to CloudWatch", metricData.Count); + } + catch (Exception ex) + { + // Log but don't fail the operation if metrics emission fails + _logger.LogWarning(ex, "Failed to emit CloudWatch metrics"); + } + } + + /// + /// Normalizes the prefix for use as a dimension value.
+ /// + private static string NormalizePrefix(string prefix) + { + if (string.IsNullOrEmpty(prefix)) + { + return "root"; + } + + // Remove trailing slash for cleaner dimension values + return prefix.TrimEnd('/'); + } +} diff --git a/Oproto.Lambda.OpenApi.Merge.Lambda/Services/OutputComparer.cs b/Oproto.Lambda.OpenApi.Merge.Lambda/Services/OutputComparer.cs new file mode 100644 index 0000000..392bf2e --- /dev/null +++ b/Oproto.Lambda.OpenApi.Merge.Lambda/Services/OutputComparer.cs @@ -0,0 +1,157 @@ +namespace Oproto.Lambda.OpenApi.Merge.Lambda.Services; + +using System.Text.Json; +using System.Text.Json.Nodes; +using Microsoft.Extensions.Logging; + +/// +/// Service for comparing OpenAPI document outputs with JSON normalization. +/// Implements semantic comparison that ignores formatting differences. +/// +public class OutputComparer : IOutputComparer +{ + private readonly ILogger<OutputComparer> _logger; + + private static readonly JsonSerializerOptions NormalizedJsonOptions = new() + { + WriteIndented = true, + PropertyNamingPolicy = null, // Preserve original property names + Encoder = System.Text.Encodings.Web.JavaScriptEncoder.UnsafeRelaxedJsonEscaping + }; + + /// + /// Initializes a new instance of the OutputComparer. + /// + /// The logger. + public OutputComparer(ILogger<OutputComparer> logger) + { + _logger = logger ?? throw new ArgumentNullException(nameof(logger)); + } + + /// + public string NormalizeJson(string json) + { + if (string.IsNullOrWhiteSpace(json)) + { + return string.Empty; + } + + try + { + var node = JsonNode.Parse(json); + if (node == null) + { + return string.Empty; + } + + var sortedNode = SortJsonNode(node); + return sortedNode?.ToJsonString(NormalizedJsonOptions) ??
string.Empty; + } + catch (JsonException ex) + { + _logger.LogWarning(ex, "Failed to parse JSON for normalization, returning original"); + return json; + } + } + + /// + public bool AreEquivalent(string json1, string json2) + { + // Handle null/empty cases + if (string.IsNullOrWhiteSpace(json1) && string.IsNullOrWhiteSpace(json2)) + { + return true; + } + + if (string.IsNullOrWhiteSpace(json1) || string.IsNullOrWhiteSpace(json2)) + { + return false; + } + + try + { + var normalized1 = NormalizeJson(json1); + var normalized2 = NormalizeJson(json2); + + var areEqual = string.Equals(normalized1, normalized2, StringComparison.Ordinal); + + if (!areEqual) + { + _logger.LogDebug("JSON documents differ after normalization"); + } + + return areEqual; + } + catch (Exception ex) + { + _logger.LogWarning(ex, "Error comparing JSON documents, treating as different"); + return false; + } + } + + /// + /// Recursively sorts a JSON node's properties alphabetically. + /// + /// The JSON node to sort. + /// A new sorted JSON node. + private static JsonNode? SortJsonNode(JsonNode? node) + { + if (node == null) + { + return null; + } + + switch (node) + { + case JsonObject obj: + return SortJsonObject(obj); + + case JsonArray arr: + return SortJsonArray(arr); + + default: + // Value nodes (string, number, bool, null) - return a copy + return JsonNode.Parse(node.ToJsonString()); + } + } + + /// + /// Sorts a JSON object's properties alphabetically. + /// + /// The JSON object to sort. + /// A new sorted JSON object. + private static JsonObject SortJsonObject(JsonObject obj) + { + var sortedObj = new JsonObject(); + + // Get all properties sorted alphabetically + var sortedProperties = obj + .OrderBy(kvp => kvp.Key, StringComparer.Ordinal) + .ToList(); + + foreach (var kvp in sortedProperties) + { + sortedObj[kvp.Key] = SortJsonNode(kvp.Value); + } + + return sortedObj; + } + + /// + /// Processes a JSON array, sorting any nested objects. 
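The normalization strategy above (sort object keys recursively, keep array order, then compare the canonical serializations) is easy to state outside C#. A Python sketch of the same idea, illustrative only:

```python
import json

def sort_node(node):
    # Objects: rebuild with keys in sorted order; arrays: preserve element
    # order (it may be semantically significant) but recurse into elements.
    if isinstance(node, dict):
        return {k: sort_node(node[k]) for k in sorted(node)}
    if isinstance(node, list):
        return [sort_node(item) for item in node]
    return node  # scalars pass through unchanged

def are_equivalent(json1: str, json2: str) -> bool:
    """Semantic equality: equal after parsing, key-sorting, and re-serializing."""
    a = json.dumps(sort_node(json.loads(json1)), indent=2)
    b = json.dumps(sort_node(json.loads(json2)), indent=2)
    return a == b
```

Two documents that differ only in whitespace or property order compare equal; reordered array elements do not.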
+ /// Note: Array element order is preserved as it may be semantically significant. + /// + /// The JSON array to process. + /// A new JSON array with sorted nested objects. + private static JsonArray SortJsonArray(JsonArray arr) + { + var sortedArr = new JsonArray(); + + foreach (var item in arr) + { + sortedArr.Add(SortJsonNode(item)); + } + + return sortedArr; + } +} diff --git a/Oproto.Lambda.OpenApi.Merge.Lambda/Services/S3Service.cs b/Oproto.Lambda.OpenApi.Merge.Lambda/Services/S3Service.cs new file mode 100644 index 0000000..d34ffc5 --- /dev/null +++ b/Oproto.Lambda.OpenApi.Merge.Lambda/Services/S3Service.cs @@ -0,0 +1,174 @@ +namespace Oproto.Lambda.OpenApi.Merge.Lambda.Services; + +using System.Net; +using System.Text; +using System.Text.Json; +using Amazon.S3; +using Amazon.S3.Model; +using Microsoft.Extensions.Logging; + +/// +/// S3 service implementation for reading and writing objects. +/// +public class S3Service : IS3Service +{ + private readonly IAmazonS3 _s3Client; + private readonly ILogger<S3Service> _logger; + + private static readonly JsonSerializerOptions JsonOptions = new() + { + PropertyNamingPolicy = JsonNamingPolicy.CamelCase, + WriteIndented = true + }; + + /// + /// Initializes a new instance of the S3Service. + /// + /// The S3 client. + /// The logger. + public S3Service(IAmazonS3 s3Client, ILogger<S3Service> logger) + { + _s3Client = s3Client ?? throw new ArgumentNullException(nameof(s3Client)); + _logger = logger ??
throw new ArgumentNullException(nameof(logger)); + } + + /// + public async Task<T?> ReadJsonAsync<T>(string bucket, string key, CancellationToken ct = default) where T : class + { + var content = await ReadTextAsync(bucket, key, ct); + if (content == null) + { + return null; + } + + try + { + return JsonSerializer.Deserialize<T>(content, JsonOptions); + } + catch (JsonException ex) + { + _logger.LogError(ex, "Failed to deserialize JSON from s3://{Bucket}/{Key}", bucket, key); + throw; + } + } + + + /// + public async Task<string?> ReadTextAsync(string bucket, string key, CancellationToken ct = default) + { + _logger.LogDebug("Reading s3://{Bucket}/{Key}", bucket, key); + + try + { + var request = new GetObjectRequest + { + BucketName = bucket, + Key = key + }; + + using var response = await _s3Client.GetObjectAsync(request, ct); + using var reader = new StreamReader(response.ResponseStream, Encoding.UTF8); + var content = await reader.ReadToEndAsync(ct); + + _logger.LogDebug("Successfully read {Length} bytes from s3://{Bucket}/{Key}", content.Length, bucket, key); + return content; + } + catch (AmazonS3Exception ex) when (ex.StatusCode == HttpStatusCode.NotFound) + { + _logger.LogDebug("Object not found: s3://{Bucket}/{Key}", bucket, key); + return null; + } + } + + /// + public async Task WriteJsonAsync<T>(string bucket, string key, T content, CancellationToken ct = default) + { + _logger.LogDebug("Writing JSON to s3://{Bucket}/{Key}", bucket, key); + + var json = JsonSerializer.Serialize(content, JsonOptions); + + var request = new PutObjectRequest + { + BucketName = bucket, + Key = key, + ContentBody = json, + ContentType = "application/json" + }; + + await _s3Client.PutObjectAsync(request, ct); + _logger.LogDebug("Successfully wrote {Length} bytes to s3://{Bucket}/{Key}", json.Length, bucket, key); + } + + /// + public async Task WriteTextAsync(string bucket, string key, string content, string contentType = "application/json", CancellationToken ct = default) + {
_logger.LogDebug("Writing text to s3://{Bucket}/{Key}", bucket, key); + + var request = new PutObjectRequest + { + BucketName = bucket, + Key = key, + ContentBody = content, + ContentType = contentType + }; + + await _s3Client.PutObjectAsync(request, ct); + _logger.LogDebug("Successfully wrote {Length} bytes to s3://{Bucket}/{Key}", content.Length, bucket, key); + } + + /// + public async Task<IReadOnlyList<string>> ListObjectsAsync(string bucket, string prefix, CancellationToken ct = default) + { + _logger.LogDebug("Listing objects in s3://{Bucket}/{Prefix}", bucket, prefix); + + var keys = new List<string>(); + string? continuationToken = null; + + do + { + var request = new ListObjectsV2Request + { + BucketName = bucket, + Prefix = prefix, + ContinuationToken = continuationToken + }; + + var response = await _s3Client.ListObjectsV2Async(request, ct); + + foreach (var obj in response.S3Objects) + { + keys.Add(obj.Key); + } + + continuationToken = response.IsTruncated ? response.NextContinuationToken : null; + } + while (continuationToken != null); + + _logger.LogDebug("Found {Count} objects in s3://{Bucket}/{Prefix}", keys.Count, bucket, prefix); + return keys; + } + + /// + public async Task<bool> ExistsAsync(string bucket, string key, CancellationToken ct = default) + { + _logger.LogDebug("Checking existence of s3://{Bucket}/{Key}", bucket, key); + + try + { + var request = new GetObjectMetadataRequest + { + BucketName = bucket, + Key = key + }; + + await _s3Client.GetObjectMetadataAsync(request, ct); + _logger.LogDebug("Object exists: s3://{Bucket}/{Key}", bucket, key); + return true; + } + catch (AmazonS3Exception ex) when (ex.StatusCode == HttpStatusCode.NotFound) + { + _logger.LogDebug("Object does not exist: s3://{Bucket}/{Key}", bucket, key); + return false; + } + } +} diff --git a/Oproto.Lambda.OpenApi.Merge.Lambda/Services/SourceDiscovery.cs b/Oproto.Lambda.OpenApi.Merge.Lambda/Services/SourceDiscovery.cs new file mode 100644 index 0000000..4d50c78 --- /dev/null +++
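The listing loop above drains a ListObjectsV2-style paginated API by following continuation tokens until none is returned. A Python sketch of that pattern with a stand-in page function (names are illustrative, not AWS SDK calls):

```python
def list_all_keys(list_page, bucket, prefix):
    """Collect every key from a token-paginated listing API.

    list_page(bucket, prefix, token) -> (keys_for_page, next_token_or_None)
    """
    keys, token = [], None
    while True:
        page, token = list_page(bucket, prefix, token)
        keys.extend(page)
        if token is None:  # truncated responses carry a continuation token
            return keys
```

A single response is capped (1,000 keys for S3), so skipping this loop would silently drop sources in large prefixes.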
b/Oproto.Lambda.OpenApi.Merge.Lambda/Services/SourceDiscovery.cs @@ -0,0 +1,271 @@ +namespace Oproto.Lambda.OpenApi.Merge.Lambda.Services; + +using Microsoft.Extensions.Logging; +using Oproto.Lambda.OpenApi.Merge; +using Oproto.Lambda.OpenApi.Merge.Lambda.Models; + +/// +/// Discovers source OpenAPI specification files from S3. +/// +public class SourceDiscovery : ISourceDiscovery +{ + private const string ConfigFileName = "config.json"; + + private readonly IS3Service _s3Service; + private readonly ILogger<SourceDiscovery> _logger; + + /// + /// Initializes a new instance of the SourceDiscovery. + /// + /// The S3 service for listing files. + /// The logger. + public SourceDiscovery(IS3Service s3Service, ILogger<SourceDiscovery> logger) + { + _s3Service = s3Service ?? throw new ArgumentNullException(nameof(s3Service)); + _logger = logger ?? throw new ArgumentNullException(nameof(logger)); + } + + /// + public async Task<IReadOnlyList<DiscoveredSource>> DiscoverSourcesAsync( + string bucket, + string prefix, + LambdaMergeConfig config, + CancellationToken ct = default) + { + if (config.AutoDiscover) + { + return await DiscoverSourcesAutoAsync(bucket, prefix, config, ct); + } + + return DiscoverSourcesExplicit(prefix, config); + } + + + /// + /// Discovers sources using auto-discovery mode (list and filter files).
+ /// + private async Task<IReadOnlyList<DiscoveredSource>> DiscoverSourcesAutoAsync( + string bucket, + string prefix, + LambdaMergeConfig config, + CancellationToken ct) + { + _logger.LogInformation("Auto-discovering source files in s3://{Bucket}/{Prefix}", bucket, prefix); + + // List all objects in the prefix + var allKeys = await _s3Service.ListObjectsAsync(bucket, prefix, ct); + + // Filter to only JSON files within the immediate prefix (not nested) + var jsonFiles = allKeys + .Where(key => IsJsonFile(key)) + .Where(key => IsInImmediatePrefix(key, prefix)) + .ToList(); + + _logger.LogDebug("Found {Count} JSON files in prefix", jsonFiles.Count); + + // Build the output file key for exclusion + var outputKey = BuildOutputKey(prefix, config.Output); + + // Filter out config.json, output file, and excluded patterns + var sources = new List<DiscoveredSource>(); + foreach (var key in jsonFiles) + { + var filename = GetFilename(key); + + // Exclude config.json + if (filename.Equals(ConfigFileName, StringComparison.OrdinalIgnoreCase)) + { + _logger.LogDebug("Excluding config file: {Key}", key); + continue; + } + + // Exclude output file + if (key.Equals(outputKey, StringComparison.OrdinalIgnoreCase)) + { + _logger.LogDebug("Excluding output file: {Key}", key); + continue; + } + + // Check exclude patterns + if (MatchesExcludePattern(filename, config.ExcludePatterns)) + { + _logger.LogDebug("Excluding file matching exclude pattern: {Key}", key); + continue; + } + + // Add as discovered source + var name = GetNameFromFilename(filename); + sources.Add(new DiscoveredSource(key, name, null)); + } + + _logger.LogInformation("Auto-discovered {Count} source files", sources.Count); + return sources; + } + + /// + /// Discovers sources using explicit sources mode.
+ /// + private IReadOnlyList<DiscoveredSource> DiscoverSourcesExplicit( + string prefix, + LambdaMergeConfig config) + { + _logger.LogInformation("Using explicit sources from configuration"); + + var sources = new List<DiscoveredSource>(); + + foreach (var sourceConfig in config.Sources) + { + var key = BuildSourceKey(prefix, sourceConfig.Path); + var name = sourceConfig.Name ?? GetNameFromFilename(GetFilename(key)); + + sources.Add(new DiscoveredSource(key, name, sourceConfig)); + } + + _logger.LogInformation("Found {Count} explicit source files", sources.Count); + return sources; + } + + + /// + /// Checks if a key represents a JSON file. + /// + private static bool IsJsonFile(string key) + { + return key.EndsWith(".json", StringComparison.OrdinalIgnoreCase); + } + + /// + /// Checks if a key is in the immediate prefix (not in a subdirectory). + /// + private static bool IsInImmediatePrefix(string key, string prefix) + { + // Remove the prefix from the key + var relativePath = key; + if (!string.IsNullOrEmpty(prefix)) + { + if (!key.StartsWith(prefix, StringComparison.OrdinalIgnoreCase)) + { + return false; + } + relativePath = key.Substring(prefix.Length); + } + + // If there's a slash in the relative path, it's in a subdirectory + return !relativePath.Contains('/'); + } + + /// + /// Gets the filename from a full S3 key. + /// + private static string GetFilename(string key) + { + var lastSlash = key.LastIndexOf('/'); + return lastSlash >= 0 ? key.Substring(lastSlash + 1) : key; + } + + /// + /// Gets a friendly name from a filename (removes .json extension). + /// + private static string GetNameFromFilename(string filename) + { + if (filename.EndsWith(".json", StringComparison.OrdinalIgnoreCase)) + { + return filename.Substring(0, filename.Length - 5); + } + return filename; + } + + /// + /// Builds the full S3 key for the output file.
+ /// + private static string BuildOutputKey(string prefix, string output) + { + // If output starts with '/', treat it as absolute (remove leading slash for S3) + if (output.StartsWith("/")) + { + return output.TrimStart('/'); + } + + // If output contains '/', treat it as a full path (not relative to prefix) + if (output.Contains("/")) + { + return output; + } + + // Otherwise, it's a simple filename relative to the prefix + if (string.IsNullOrEmpty(prefix)) + { + return output; + } + + // Ensure prefix ends with / + if (!prefix.EndsWith("/")) + { + prefix += "/"; + } + + return prefix + output; + } + + /// + /// Builds the full S3 key for a source file. + /// + private static string BuildSourceKey(string prefix, string path) + { + if (string.IsNullOrEmpty(prefix)) + { + return path; + } + + // Ensure prefix ends with / + if (!prefix.EndsWith("/")) + { + prefix += "/"; + } + + return prefix + path; + } + + /// + /// Checks if a filename matches any of the exclude patterns. + /// + private static bool MatchesExcludePattern(string filename, IReadOnlyList<string> excludePatterns) + { + if (excludePatterns == null || excludePatterns.Count == 0) + { + return false; + } + + foreach (var pattern in excludePatterns) + { + if (MatchesGlobPattern(filename, pattern)) + { + return true; + } + } + + return false; + } + + /// + /// Matches a filename against a simple glob pattern. + /// Supports * (any characters) and ? (single character).
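The output-key resolution above applies three rules in order: a leading slash means an absolute key (slash stripped for S3), any other embedded slash means a full path, and a bare filename is joined onto the prefix. A Python sketch of the same decision table, illustrative only:

```python
def build_output_key(prefix: str, output: str) -> str:
    if output.startswith("/"):
        return output.lstrip("/")       # absolute: S3 keys have no leading slash
    if "/" in output:
        return output                   # full path: not relative to the prefix
    if not prefix:
        return output                   # bare filename at bucket root
    if not prefix.endswith("/"):
        prefix += "/"                   # normalize the trailing slash
    return prefix + output              # filename relative to the prefix
```

Keeping these rules ordered matters: an absolute path must be recognized before the generic "contains a slash" case.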
+ /// + private static bool MatchesGlobPattern(string filename, string pattern) + { + if (string.IsNullOrEmpty(pattern)) + { + return false; + } + + // Convert glob pattern to regex + var regexPattern = "^" + System.Text.RegularExpressions.Regex.Escape(pattern) + .Replace("\\*", ".*") + .Replace("\\?", ".") + "$"; + + return System.Text.RegularExpressions.Regex.IsMatch( + filename, + regexPattern, + System.Text.RegularExpressions.RegexOptions.IgnoreCase); + } +} diff --git a/Oproto.Lambda.OpenApi.Merge.Lambda/Startup.cs b/Oproto.Lambda.OpenApi.Merge.Lambda/Startup.cs new file mode 100644 index 0000000..a333d00 --- /dev/null +++ b/Oproto.Lambda.OpenApi.Merge.Lambda/Startup.cs @@ -0,0 +1,42 @@ +namespace Oproto.Lambda.OpenApi.Merge.Lambda; + +using Amazon.CloudWatch; +using Amazon.Lambda.Annotations; +using Amazon.S3; +using Microsoft.Extensions.DependencyInjection; +using Microsoft.Extensions.Logging; +using Oproto.Lambda.OpenApi.Merge.Lambda.Functions; +using Oproto.Lambda.OpenApi.Merge.Lambda.Services; + +/// +/// Startup class for configuring dependency injection in the Lambda function. +/// +[LambdaStartup] +public class Startup +{ + /// + /// Configures services for dependency injection. + /// + /// The service collection. 
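The glob matcher above uses the escape-then-reenable trick: regex-escape the whole pattern, then turn the escaped `\*` and `\?` back into `.*` and `.`, anchor, and match case-insensitively. The same technique in Python, as an illustrative sketch:

```python
import re

def matches_glob(filename: str, pattern: str) -> bool:
    """Simple glob match: * = any run of characters, ? = one character."""
    if not pattern:
        return False
    # Escape everything, then restore the two wildcard metacharacters.
    regex = "^" + re.escape(pattern).replace(r"\*", ".*").replace(r"\?", ".") + "$"
    return re.match(regex, filename, re.IGNORECASE) is not None
```

Escaping first is what keeps literal dots in patterns like `*-draft.json` from matching arbitrary characters.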
+ public void ConfigureServices(IServiceCollection services) + { + // Register AWS services + services.AddSingleton(); + services.AddSingleton(); + + // Register application services + services.AddSingleton(); + services.AddSingleton(); + services.AddSingleton(); + services.AddSingleton(); + services.AddSingleton(); + services.AddSingleton(); + + // Register logging + services.AddLogging(builder => + { + builder.AddLambdaLogger(); + builder.SetMinimumLevel(LogLevel.Information); + }); + } +} diff --git a/Oproto.Lambda.OpenApi.Merge.Lambda/serverless.template b/Oproto.Lambda.OpenApi.Merge.Lambda/serverless.template new file mode 100644 index 0000000..84d5ce6 --- /dev/null +++ b/Oproto.Lambda.OpenApi.Merge.Lambda/serverless.template @@ -0,0 +1,24 @@ +{ + "AWSTemplateFormatVersion": "2010-09-09", + "Transform": "AWS::Serverless-2016-10-31", + "Description": "This template is partially managed by Amazon.Lambda.Annotations (v1.6.1.0).", + "Resources": { + "Merge": { + "Type": "AWS::Serverless::Function", + "Metadata": { + "Tool": "Amazon.Lambda.Annotations" + }, + "Properties": { + "Runtime": "dotnet8", + "CodeUri": ".", + "MemorySize": 512, + "Timeout": 30, + "Policies": [ + "AWSLambdaBasicExecutionRole" + ], + "PackageType": "Zip", + "Handler": "Oproto.Lambda.OpenApi.Merge.Lambda::Oproto.Lambda.OpenApi.Merge.Lambda.Functions.MergeFunction_Merge_Generated::Merge" + } + } + } +} \ No newline at end of file diff --git a/Oproto.Lambda.OpenApi.Merge.Tests/ConfigCompatibilityPropertyTests.cs b/Oproto.Lambda.OpenApi.Merge.Tests/ConfigCompatibilityPropertyTests.cs new file mode 100644 index 0000000..96bdb7f --- /dev/null +++ b/Oproto.Lambda.OpenApi.Merge.Tests/ConfigCompatibilityPropertyTests.cs @@ -0,0 +1,329 @@ +using FsCheck; +using FsCheck.Xunit; +using Oproto.Lambda.OpenApi.Merge; +using System.Text.Json; + +namespace Oproto.Lambda.OpenApi.Merge.Tests; + +/// +/// Property-based tests for MergeConfiguration compatibility and round-trip serialization. 
+///
+public class ConfigCompatibilityPropertyTests
+{
+    /// <summary>
+    /// Generators for MergeConfiguration test data.
+    /// </summary>
+    private static class ConfigGenerators
+    {
+        public static Gen<string> TitleGen()
+        {
+            return Gen.Elements("My API", "Test API", "Public API", "Internal API", "Service API", "Gateway API");
+        }
+
+        public static Gen<string> VersionGen()
+        {
+            return Gen.Elements("1.0.0", "2.0.0", "1.0.0-beta", "3.1.0", "0.1.0", "1.2.3");
+        }
+
+        public static Gen<string?> DescriptionGen()
+        {
+            return Gen.OneOf(
+                Gen.Constant<string?>(null),
+                Gen.Elements<string?>("API description", "Test description", "Merged API specification", "Service endpoints")
+            );
+        }
+
+        public static Gen<string> ServerUrlGen()
+        {
+            return Gen.Elements(
+                "https://api.example.com/v1",
+                "https://staging.example.com/v1",
+                "https://dev.example.com/v1",
+                "https://api.production.com",
+                "https://localhost:5000");
+        }
+
+        public static Gen<string?> ServerDescriptionGen()
+        {
+            return Gen.OneOf(
+                Gen.Constant<string?>(null),
+                Gen.Elements<string?>("Production", "Staging", "Development", "Local")
+            );
+        }
+
+        public static Gen<string> SourcePathGen()
+        {
+            return Gen.Elements(
+                "./api1.json", "./api2.json", "./users-service.json",
+                "./orders-service.json", "./products.json", "service.json");
+        }
+
+        public static Gen<string?> PathPrefixGen()
+        {
+            return Gen.OneOf(
+                Gen.Constant<string?>(null),
+                Gen.Elements<string?>("/v1", "/v2", "/api", "/users", "/orders", "/products")
+            );
+        }
+
+        public static Gen<string?> OperationIdPrefixGen()
+        {
+            return Gen.OneOf(
+                Gen.Constant<string?>(null),
+                Gen.Elements<string?>("api1_", "api2_", "users_", "orders_", "products_")
+            );
+        }
+
+        public static Gen<string?> SourceNameGen()
+        {
+            return Gen.OneOf(
+                Gen.Constant<string?>(null),
+                Gen.Elements<string?>("Users", "Orders", "Products", "API1", "API2", "Service")
+            );
+        }
+
+        public static Gen<string> OutputGen()
+        {
+            return Gen.Elements(
+                "merged-openapi.json", "output.json", "api.json",
+                "merged.json", "combined-api.json", "openapi.json");
+        }
+
+        public static Gen<string> ExcludePatternGen()
+        {
+            return Gen.Elements(
+                "*-draft.json", "*.backup.json", "temp-*.json",
+                "*-old.json", "*.bak", "draft/*.json");
+        }
+
+        public static Gen<MergeServerConfiguration> ServerConfigGen()
+        {
+            return from url in ServerUrlGen()
+                   from description in ServerDescriptionGen()
+                   select new MergeServerConfiguration
+                   {
+                       Url = url,
+                       Description = description
+                   };
+        }
+
+        public static Gen<SourceConfiguration> SourceConfigGen()
+        {
+            return from path in SourcePathGen()
+                   from pathPrefix in PathPrefixGen()
+                   from operationIdPrefix in OperationIdPrefixGen()
+                   from name in SourceNameGen()
+                   select new SourceConfiguration
+                   {
+                       Path = path,
+                       PathPrefix = pathPrefix,
+                       OperationIdPrefix = operationIdPrefix,
+                       Name = name
+                   };
+        }
+
+        public static Gen<MergeInfoConfiguration> InfoConfigGen()
+        {
+            return from title in TitleGen()
+                   from version in VersionGen()
+                   from description in DescriptionGen()
+                   select new MergeInfoConfiguration
+                   {
+                       Title = title,
+                       Version = version,
+                       Description = description
+                   };
+        }
+
+        public static Gen<MergeConfiguration> MergeConfigGen()
+        {
+            return from info in InfoConfigGen()
+                   from serverCount in Gen.Choose(0, 3)
+                   from servers in Gen.ListOf(serverCount, ServerConfigGen())
+                   from sourceCount in Gen.Choose(1, 4)
+                   from sources in Gen.ListOf(sourceCount, SourceConfigGen())
+                   from output in OutputGen()
+                   from schemaConflict in Gen.Elements(
+                       SchemaConflictStrategy.Rename,
+                       SchemaConflictStrategy.FirstWins,
+                       SchemaConflictStrategy.Fail)
+                   from autoDiscover in Gen.Elements(true, false)
+                   from excludePatternCount in Gen.Choose(0, 3)
+                   from excludePatterns in Gen.ListOf(excludePatternCount, ExcludePatternGen())
+                   select new MergeConfiguration
+                   {
+                       Info = info,
+                       Servers = servers.ToList(),
+                       Sources = sources.ToList(),
+                       Output = output,
+                       SchemaConflict = schemaConflict,
+                       AutoDiscover = autoDiscover,
+                       ExcludePatterns = excludePatterns.Distinct().ToList()
+                   };
+        }
+
+        ///
+        /// Generates a MergeConfiguration without the new AutoDiscover and ExcludePatterns properties
+        /// to test backwards compatibility.
+        ///
+        public static Gen<MergeConfiguration> LegacyMergeConfigGen()
+        {
+            return from info in InfoConfigGen()
+                   from serverCount in Gen.Choose(0, 3)
+                   from servers in Gen.ListOf(serverCount, ServerConfigGen())
+                   from sourceCount in Gen.Choose(1, 4)
+                   from sources in Gen.ListOf(sourceCount, SourceConfigGen())
+                   from output in OutputGen()
+                   from schemaConflict in Gen.Elements(
+                       SchemaConflictStrategy.Rename,
+                       SchemaConflictStrategy.FirstWins,
+                       SchemaConflictStrategy.Fail)
+                   select new MergeConfiguration
+                   {
+                       Info = info,
+                       Servers = servers.ToList(),
+                       Sources = sources.ToList(),
+                       Output = output,
+                       SchemaConflict = schemaConflict
+                       // AutoDiscover and ExcludePatterns use defaults
+                   };
+        }
+    }
+
+    /// <summary>
+    /// Feature: lambda-merge-tool, Property 2: Config Compatibility Round-Trip
+    /// For any valid MergeConfiguration object, serializing it to JSON and deserializing it
+    /// SHALL preserve all original property values.
+    /// **Validates: Requirements 2.3, 3.1**
+    /// </summary>
+    [Property(MaxTest = 100)]
+    public Property MergeConfiguration_RoundTrip_PreservesAllProperties()
+    {
+        return Prop.ForAll(
+            ConfigGenerators.MergeConfigGen().ToArbitrary(),
+            config =>
+            {
+                // Serialize to JSON
+                var json = JsonSerializer.Serialize(config);
+
+                // Deserialize back
+                var deserialized = JsonSerializer.Deserialize<MergeConfiguration>(json);
+
+                // Verify all properties are preserved
+                var infoPreserved = deserialized!.Info.Title == config.Info.Title
+                    && deserialized.Info.Version == config.Info.Version
+                    && deserialized.Info.Description == config.Info.Description;
+
+                var serversPreserved = deserialized.Servers.Count == config.Servers.Count
+                    && deserialized.Servers.Zip(config.Servers, (d, c) =>
+                        d.Url == c.Url && d.Description == c.Description).All(x => x);
+
+                var sourcesPreserved = deserialized.Sources.Count == config.Sources.Count
+                    && deserialized.Sources.Zip(config.Sources, (d, c) =>
+                        d.Path == c.Path
+                        && d.PathPrefix == c.PathPrefix
+                        && d.OperationIdPrefix == c.OperationIdPrefix
+                        && d.Name == c.Name).All(x =>
x);
+
+                var outputPreserved = deserialized.Output == config.Output;
+                var schemaConflictPreserved = deserialized.SchemaConflict == config.SchemaConflict;
+                var autoDiscoverPreserved = deserialized.AutoDiscover == config.AutoDiscover;
+                var excludePatternsPreserved = deserialized.ExcludePatterns.Count == config.ExcludePatterns.Count
+                    && deserialized.ExcludePatterns.SequenceEqual(config.ExcludePatterns);
+
+                return infoPreserved.Label("Info should be preserved")
+                    .And(serversPreserved).Label("Servers should be preserved")
+                    .And(sourcesPreserved).Label("Sources should be preserved")
+                    .And(outputPreserved).Label("Output should be preserved")
+                    .And(schemaConflictPreserved).Label("SchemaConflict should be preserved")
+                    .And(autoDiscoverPreserved).Label("AutoDiscover should be preserved")
+                    .And(excludePatternsPreserved).Label("ExcludePatterns should be preserved");
+            });
+    }
+
+    ///
+    /// Feature: lambda-merge-tool, Property 2: Config Compatibility Round-Trip
+    /// For any legacy MergeConfiguration (without AutoDiscover/ExcludePatterns), deserializing
+    /// SHALL have sensible defaults for the new properties (AutoDiscover = false, ExcludePatterns = empty).
+    /// **Validates: Requirements 2.3, 3.1**
+    ///
+    [Property(MaxTest = 100)]
+    public Property LegacyConfig_Deserialized_HasSensibleDefaults()
+    {
+        return Prop.ForAll(
+            ConfigGenerators.LegacyMergeConfigGen().ToArbitrary(),
+            config =>
+            {
+                // Serialize to JSON (will include autoDiscover: false and excludePatterns: [])
+                var json = JsonSerializer.Serialize(config);
+
+                // Simulate legacy JSON by removing the new properties
+                var jsonDoc = System.Text.Json.JsonDocument.Parse(json);
+                var legacyJson = CreateLegacyJson(jsonDoc);
+
+                // Deserialize the legacy JSON
+                var deserialized = JsonSerializer.Deserialize<MergeConfiguration>(legacyJson);
+
+                // Verify defaults are applied
+                var autoDiscoverDefault = deserialized!.AutoDiscover == false;
+                var excludePatternsDefault = deserialized.ExcludePatterns != null
+                    && deserialized.ExcludePatterns.Count == 0;
+
+                // Verify original properties are preserved
+                var infoPreserved = deserialized.Info.Title == config.Info.Title
+                    && deserialized.Info.Version == config.Info.Version;
+                var outputPreserved = deserialized.Output == config.Output;
+                var schemaConflictPreserved = deserialized.SchemaConflict == config.SchemaConflict;
+
+                return autoDiscoverDefault.Label("AutoDiscover should default to false")
+                    .And(excludePatternsDefault).Label("ExcludePatterns should default to empty list")
+                    .And(infoPreserved).Label("Info should be preserved")
+                    .And(outputPreserved).Label("Output should be preserved")
+                    .And(schemaConflictPreserved).Label("SchemaConflict should be preserved");
+            });
+    }
+
+    ///
+    /// Feature: lambda-merge-tool, Property 2: Config Compatibility Round-Trip
+    /// For any MergeConfiguration, serialization SHALL be deterministic (same input produces same output).
+    /// **Validates: Requirements 2.3, 3.1**
+    ///
+    [Property(MaxTest = 100)]
+    public Property MergeConfiguration_Serialization_IsDeterministic()
+    {
+        return Prop.ForAll(
+            ConfigGenerators.MergeConfigGen().ToArbitrary(),
+            config =>
+            {
+                // Serialize twice
+                var json1 = JsonSerializer.Serialize(config);
+                var json2 = JsonSerializer.Serialize(config);
+
+                // Both serializations should produce identical output
+                return (json1 == json2).Label("Serialization should be deterministic");
+            });
+    }
+
+    /// <summary>
+    /// Creates a legacy JSON string by removing autoDiscover and excludePatterns properties.
+    /// </summary>
+    private static string CreateLegacyJson(System.Text.Json.JsonDocument jsonDoc)
+    {
+        var options = new JsonWriterOptions { Indented = false };
+        using var stream = new System.IO.MemoryStream();
+        using (var writer = new Utf8JsonWriter(stream, options))
+        {
+            writer.WriteStartObject();
+            foreach (var property in jsonDoc.RootElement.EnumerateObject())
+            {
+                // Skip the new properties to simulate legacy JSON
+                if (property.Name == "autoDiscover" || property.Name == "excludePatterns")
+                    continue;
+
+                property.WriteTo(writer);
+            }
+            writer.WriteEndObject();
+        }
+        return System.Text.Encoding.UTF8.GetString(stream.ToArray());
+    }
+}
diff --git a/Oproto.Lambda.OpenApi.Merge.Tests/MergeCommandAutoDiscoverTests.cs b/Oproto.Lambda.OpenApi.Merge.Tests/MergeCommandAutoDiscoverTests.cs
new file mode 100644
index 0000000..0981d0d
--- /dev/null
+++ b/Oproto.Lambda.OpenApi.Merge.Tests/MergeCommandAutoDiscoverTests.cs
@@ -0,0 +1,124 @@
+namespace Oproto.Lambda.OpenApi.Merge.Tests;
+
+using Oproto.Lambda.OpenApi.Merge.Tool.Commands;
+using Xunit;
+
+///
+/// Unit tests for CLI auto-discover functionality.
+/// Validates: Requirements 3.2, 3.9
+///
+public class MergeCommandAutoDiscoverTests
+{
+    #region MatchesGlobPattern Tests
+
+    [Theory]
+    [InlineData("test.json", "*.json", true)]
+    [InlineData("test.json", "*.txt", false)]
+    [InlineData("api-draft.json", "*-draft.json", true)]
+    [InlineData("api-final.json", "*-draft.json", false)]
+    [InlineData("backup.api.json", "*.backup.json", false)]
+    [InlineData("api.backup.json", "*.backup.json", true)]
+    [InlineData("test.json", "test.json", true)]
+    [InlineData("other.json", "test.json", false)]
+    [InlineData("a.json", "?.json", true)]
+    [InlineData("ab.json", "?.json", false)]
+    [InlineData("TEST.JSON", "*.json", true)] // Case insensitive
+    [InlineData("Api-Draft.json", "*-draft.json", true)] // Case insensitive
+    public void MatchesGlobPattern_VariousPatterns_ReturnsExpectedResult(
+        string fileName, string pattern, bool expected)
+    {
+        // Act
+        var result = MergeCommand.MatchesGlobPattern(fileName, pattern);
+
+        // Assert
+        Assert.Equal(expected, result);
+    }
+
+    [Fact]
+    public void MatchesGlobPattern_EmptyPattern_MatchesEmptyFileName()
+    {
+        // Act & Assert
+        Assert.True(MergeCommand.MatchesGlobPattern("", ""));
+        Assert.False(MergeCommand.MatchesGlobPattern("test.json", ""));
+    }
+
+    [Fact]
+    public void MatchesGlobPattern_WildcardOnly_MatchesAnyFileName()
+    {
+        // Act & Assert
+        Assert.True(MergeCommand.MatchesGlobPattern("anything.json", "*"));
+        Assert.True(MergeCommand.MatchesGlobPattern("", "*"));
+        Assert.True(MergeCommand.MatchesGlobPattern("complex-file-name.backup.json", "*"));
+    }
+
+    #endregion
+
+    #region MatchesExcludePattern Tests
+
+    [Fact]
+    public void MatchesExcludePattern_EmptyPatternList_ReturnsFalse()
+    {
+        // Arrange
+        var patterns = new List<string>();
+
+        // Act
+        var result = MergeCommand.MatchesExcludePattern("test.json", patterns);
+
+        // Assert
+        Assert.False(result);
+    }
+
+    [Fact]
+    public void MatchesExcludePattern_SingleMatchingPattern_ReturnsTrue()
+    {
+        // Arrange
+        var patterns = new List<string> { "*-draft.json" };
+
+        // Act
+        var result = MergeCommand.MatchesExcludePattern("api-draft.json", patterns);
+
+        // Assert
+        Assert.True(result);
+    }
+
+    [Fact]
+    public void MatchesExcludePattern_SingleNonMatchingPattern_ReturnsFalse()
+    {
+        // Arrange
+        var patterns = new List<string> { "*-draft.json" };
+
+        // Act
+        var result = MergeCommand.MatchesExcludePattern("api-final.json", patterns);
+
+        // Assert
+        Assert.False(result);
+    }
+
+    [Fact]
+    public void MatchesExcludePattern_MultiplePatterns_MatchesAny()
+    {
+        // Arrange
+        var patterns = new List<string> { "*-draft.json", "*.backup.json", "temp-*" };
+
+        // Act & Assert
+        Assert.True(MergeCommand.MatchesExcludePattern("api-draft.json", patterns));
+        Assert.True(MergeCommand.MatchesExcludePattern("old.backup.json", patterns));
+        Assert.True(MergeCommand.MatchesExcludePattern("temp-file.json", patterns));
+        Assert.False(MergeCommand.MatchesExcludePattern("api-final.json", patterns));
+    }
+
+    [Fact]
+    public void MatchesExcludePattern_CaseInsensitive_MatchesRegardlessOfCase()
+    {
+        // Arrange
+        var patterns = new List<string> { "*-DRAFT.json" };
+
+        // Act
+        var result = MergeCommand.MatchesExcludePattern("api-draft.json", patterns);
+
+        // Assert
+        Assert.True(result);
+    }
+
+    #endregion
+}
diff --git a/Oproto.Lambda.OpenApi.Merge.Tests/MergeConfigurationTests.cs b/Oproto.Lambda.OpenApi.Merge.Tests/MergeConfigurationTests.cs
index 0320106..615bee8 100644
--- a/Oproto.Lambda.OpenApi.Merge.Tests/MergeConfigurationTests.cs
+++ b/Oproto.Lambda.OpenApi.Merge.Tests/MergeConfigurationTests.cs
@@ -20,6 +20,9 @@ public void MergeConfiguration_DefaultValues_AreCorrect()
         Assert.Empty(config.Sources);
         Assert.Equal("merged-openapi.json", config.Output);
         Assert.Equal(SchemaConflictStrategy.Rename, config.SchemaConflict);
+        Assert.False(config.AutoDiscover);
+        Assert.NotNull(config.ExcludePatterns);
+        Assert.Empty(config.ExcludePatterns);
     }

     [Fact]
@@ -200,4 +203,125 @@ public void MergeServerConfiguration_DefaultValues_AreCorrect()
         Assert.Equal(string.Empty, server.Url);
         Assert.Null(server.Description);
     }
+
+    [Fact]
+    public void MergeConfiguration_Deserialize_AutoDiscoverTrue()
+    {
+        var json = """
+            {
+              "info": {
+                "title": "Auto API",
+                "version": "1.0.0"
+              },
+              "autoDiscover": true,
+              "excludePatterns": ["*-draft.json", "*.backup.json"],
+              "output": "merged.json"
+            }
+            """;
+
+        var config = JsonSerializer.Deserialize<MergeConfiguration>(json);
+
+        Assert.NotNull(config);
+        Assert.True(config.AutoDiscover);
+        Assert.Equal(2, config.ExcludePatterns.Count);
+        Assert.Contains("*-draft.json", config.ExcludePatterns);
+        Assert.Contains("*.backup.json", config.ExcludePatterns);
+    }
+
+    [Fact]
+    public void MergeConfiguration_Deserialize_AutoDiscoverFalse_WithExplicitSources()
+    {
+        var json = """
+            {
+              "info": {
+                "title": "Explicit API",
+                "version": "1.0.0"
+              },
+              "autoDiscover": false,
+              "sources": [
+                { "path": "./api1.json" },
+                { "path": "./api2.json" }
+              ],
+              "output": "merged.json"
+            }
+            """;
+
+        var config = JsonSerializer.Deserialize<MergeConfiguration>(json);
+
+        Assert.NotNull(config);
+        Assert.False(config.AutoDiscover);
+        Assert.Equal(2, config.Sources.Count);
+        Assert.Empty(config.ExcludePatterns);
+    }
+
+    [Fact]
+    public void MergeConfiguration_Deserialize_MissingAutoDiscover_DefaultsFalse()
+    {
+        var json = """
+            {
+              "info": {
+                "title": "Default API",
+                "version": "1.0.0"
+              },
+              "sources": [
+                { "path": "./api.json" }
+              ]
+            }
+            """;
+
+        var config = JsonSerializer.Deserialize<MergeConfiguration>(json);
+
+        Assert.NotNull(config);
+        Assert.False(config.AutoDiscover);
+        Assert.Empty(config.ExcludePatterns);
+    }
+
+    [Fact]
+    public void MergeConfiguration_Serialize_AutoDiscoverAndExcludePatterns()
+    {
+        var config = new MergeConfiguration
+        {
+            Info = new MergeInfoConfiguration
+            {
+                Title = "Test API",
+                Version = "1.0.0"
+            },
+            AutoDiscover = true,
+            ExcludePatterns = new List<string> { "*.draft.json", "temp-*.json" },
+            Output = "output.json"
+        };
+
+        var json = JsonSerializer.Serialize(config);
+
+        Assert.Contains("\"autoDiscover\":true", json);
+        Assert.Contains("\"excludePatterns\"", json);
+        Assert.Contains("*.draft.json", json);
+        Assert.Contains("temp-*.json", json);
+    }
+
+    [Fact]
+    public void MergeConfiguration_RoundTrip_WithAutoDiscover()
+    {
+        var config = new MergeConfiguration
+        {
+            Info = new MergeInfoConfiguration
+            {
+                Title = "Round Trip API",
+                Version = "2.0.0"
+            },
+            AutoDiscover = true,
+            ExcludePatterns = new List<string> { "*-backup.json", "draft/*.json" },
+            Output = "merged.json",
+            SchemaConflict = SchemaConflictStrategy.Rename
+        };
+
+        var json = JsonSerializer.Serialize(config);
+        var deserialized = JsonSerializer.Deserialize<MergeConfiguration>(json);
+
+        Assert.NotNull(deserialized);
+        Assert.Equal(config.AutoDiscover, deserialized.AutoDiscover);
+        Assert.Equal(config.ExcludePatterns.Count, deserialized.ExcludePatterns.Count);
+        Assert.Equal(config.ExcludePatterns[0], deserialized.ExcludePatterns[0]);
+        Assert.Equal(config.ExcludePatterns[1], deserialized.ExcludePatterns[1]);
+    }
 }
diff --git a/Oproto.Lambda.OpenApi.Merge.Tests/Oproto.Lambda.OpenApi.Merge.Tests.csproj b/Oproto.Lambda.OpenApi.Merge.Tests/Oproto.Lambda.OpenApi.Merge.Tests.csproj
index ae05a9c..f289789 100644
--- a/Oproto.Lambda.OpenApi.Merge.Tests/Oproto.Lambda.OpenApi.Merge.Tests.csproj
+++ b/Oproto.Lambda.OpenApi.Merge.Tests/Oproto.Lambda.OpenApi.Merge.Tests.csproj
@@ -28,6 +28,7 @@
+
diff --git a/Oproto.Lambda.OpenApi.Merge.Tool/Commands/MergeCommand.cs b/Oproto.Lambda.OpenApi.Merge.Tool/Commands/MergeCommand.cs
index 20a4f27..f0591f5 100644
--- a/Oproto.Lambda.OpenApi.Merge.Tool/Commands/MergeCommand.cs
+++ b/Oproto.Lambda.OpenApi.Merge.Tool/Commands/MergeCommand.cs
@@ -2,6 +2,7 @@ namespace Oproto.Lambda.OpenApi.Merge.Tool.Commands;

 using System.CommandLine;
 using System.Text.Json;
+using System.Text.RegularExpressions;
 using Microsoft.OpenApi;
 using Microsoft.OpenApi.Extensions;
 using Microsoft.OpenApi.Models;
@@ -222,20 +223,41 @@ private static async Task<MergeConfiguration> LoadConfigurationAsync(FileInfo co
             throw new
ConfigurationException("Failed to deserialize configuration file."); } - // Validate required fields - ValidateConfiguration(config); - // Resolve relative paths based on config file location var configDir = configFile.DirectoryName ?? "."; - foreach (var source in config.Sources) + + // Handle auto-discover mode + if (config.AutoDiscover) { - // First expand tilde paths - source.Path = PathExpander.ExpandPath(source.Path); + if (verbose) + { + Console.WriteLine(" Auto-discover mode enabled, scanning for JSON files..."); + } + + var discoveredSources = DiscoverSourceFiles(configDir, config, verbose); + config.Sources = discoveredSources; - // Then resolve relative paths - if (!Path.IsPathRooted(source.Path)) + if (config.Sources.Count == 0) + { + throw new ConfigurationException("No source files found in auto-discover mode."); + } + } + else + { + // Validate required fields for explicit sources mode + ValidateConfiguration(config); + + // Resolve relative paths for explicit sources + foreach (var source in config.Sources) { - source.Path = Path.GetFullPath(Path.Combine(configDir, source.Path)); + // First expand tilde paths + source.Path = PathExpander.ExpandPath(source.Path); + + // Then resolve relative paths + if (!Path.IsPathRooted(source.Path)) + { + source.Path = Path.GetFullPath(Path.Combine(configDir, source.Path)); + } } } @@ -254,11 +276,109 @@ private static async Task LoadConfigurationAsync(FileInfo co Console.WriteLine($" Sources: {config.Sources.Count}"); Console.WriteLine($" Output: {config.Output}"); Console.WriteLine($" Schema Conflict Strategy: {config.SchemaConflict}"); + Console.WriteLine($" Auto-Discover: {config.AutoDiscover}"); + if (config.ExcludePatterns.Count > 0) + { + Console.WriteLine($" Exclude Patterns: {string.Join(", ", config.ExcludePatterns)}"); + } } return config; } + /// + /// Discovers source files in the specified directory based on configuration. 
+    ///
+    private static List<SourceConfiguration> DiscoverSourceFiles(string directory, MergeConfiguration config, bool verbose)
+    {
+        var sources = new List<SourceConfiguration>();
+        var outputFileName = Path.GetFileName(config.Output);
+
+        // Get all JSON files in the directory
+        var jsonFiles = Directory.GetFiles(directory, "*.json", SearchOption.TopDirectoryOnly);
+
+        foreach (var filePath in jsonFiles)
+        {
+            var fileName = Path.GetFileName(filePath);
+
+            // Skip config.json
+            if (fileName.Equals("config.json", StringComparison.OrdinalIgnoreCase))
+            {
+                if (verbose)
+                {
+                    Console.WriteLine($"  Skipping config file: {fileName}");
+                }
+                continue;
+            }
+
+            // Skip output file
+            if (fileName.Equals(outputFileName, StringComparison.OrdinalIgnoreCase))
+            {
+                if (verbose)
+                {
+                    Console.WriteLine($"  Skipping output file: {fileName}");
+                }
+                continue;
+            }
+
+            // Check exclude patterns
+            if (MatchesExcludePattern(fileName, config.ExcludePatterns))
+            {
+                if (verbose)
+                {
+                    Console.WriteLine($"  Excluding (pattern match): {fileName}");
+                }
+                continue;
+            }
+
+            if (verbose)
+            {
+                Console.WriteLine($"  Discovered: {fileName}");
+            }
+
+            sources.Add(new SourceConfiguration
+            {
+                Path = filePath,
+                Name = Path.GetFileNameWithoutExtension(fileName)
+            });
+        }
+
+        // Sort sources by name for deterministic ordering
+        sources.Sort((a, b) => string.Compare(a.Name, b.Name, StringComparison.Ordinal));
+
+        return sources;
+    }
+
+    /// <summary>
+    /// Checks if a filename matches any of the exclude patterns.
+    /// Supports simple glob patterns: * (any characters), ? (single character)
+    /// </summary>
+    internal static bool MatchesExcludePattern(string fileName, List<string> excludePatterns)
+    {
+        foreach (var pattern in excludePatterns)
+        {
+            if (MatchesGlobPattern(fileName, pattern))
+            {
+                return true;
+            }
+        }
+        return false;
+    }
+
+    ///
+    /// Matches a filename against a simple glob pattern.
+    /// Supports * (any characters) and ? (single character).
+    ///
+    internal static bool MatchesGlobPattern(string fileName, string pattern)
+    {
+        // Convert glob pattern to regex
+        var regexPattern = "^" + Regex.Escape(pattern)
+            .Replace("\\*", ".*")
+            .Replace("\\?", ".") + "$";
+
+        return Regex.IsMatch(fileName, regexPattern, RegexOptions.IgnoreCase);
+    }
+
     private static void ValidateConfiguration(MergeConfiguration config)
     {
         var missingFields = new List<string>();
diff --git a/Oproto.Lambda.OpenApi.Merge.Tool/Oproto.Lambda.OpenApi.Merge.Tool.csproj b/Oproto.Lambda.OpenApi.Merge.Tool/Oproto.Lambda.OpenApi.Merge.Tool.csproj
index ee34ccb..355cfdd 100644
--- a/Oproto.Lambda.OpenApi.Merge.Tool/Oproto.Lambda.OpenApi.Merge.Tool.csproj
+++ b/Oproto.Lambda.OpenApi.Merge.Tool/Oproto.Lambda.OpenApi.Merge.Tool.csproj
@@ -19,6 +19,10 @@ PACKAGE_README.md
+
+
+
+
diff --git a/Oproto.Lambda.OpenApi.Merge/MergeConfiguration.cs b/Oproto.Lambda.OpenApi.Merge/MergeConfiguration.cs
index 9a019ed..090033f 100644
--- a/Oproto.Lambda.OpenApi.Merge/MergeConfiguration.cs
+++ b/Oproto.Lambda.OpenApi.Merge/MergeConfiguration.cs
@@ -38,6 +38,22 @@ public class MergeConfiguration
     [JsonPropertyName("schemaConflict")]
     [JsonConverter(typeof(SchemaConflictStrategyConverter))]
     public SchemaConflictStrategy SchemaConflict { get; set; } = SchemaConflictStrategy.Rename;
+
+    /// <summary>
+    /// Whether to auto-discover source files in the directory.
+    /// When true, ignores the sources list and discovers all .json files.
+    /// Default: false (use explicit sources list).
+    /// </summary>
+    [JsonPropertyName("autoDiscover")]
+    public bool AutoDiscover { get; set; } = false;
+
+    ///
+    /// Glob patterns for files to exclude from auto-discovery.
+    /// Only used when autoDiscover is true.
+    /// Always excludes the output file automatically.
+    ///
+    [JsonPropertyName("excludePatterns")]
+    public List<string> ExcludePatterns { get; set; } = new List<string>();
 }

 ///
diff --git a/Oproto.Lambda.OpenApi.Merge/OpenApiDocumentSorter.cs b/Oproto.Lambda.OpenApi.Merge/OpenApiDocumentSorter.cs
index 2e0fec3..898ca19 100644
--- a/Oproto.Lambda.OpenApi.Merge/OpenApiDocumentSorter.cs
+++ b/Oproto.Lambda.OpenApi.Merge/OpenApiDocumentSorter.cs
@@ -3,6 +3,7 @@ namespace Oproto.Lambda.OpenApi.Merge;

 using System;
 using System.Collections.Generic;
 using System.Linq;
+using System.Runtime.CompilerServices;
 using Microsoft.OpenApi.Any;
 using Microsoft.OpenApi.Models;
@@ -108,10 +109,11 @@ internal static IDictionary<string, OpenApiSchema> SortSchemas(IDictionary<string, OpenApiSchema> schemas)
+    var visited = new HashSet<OpenApiSchema>(ReferenceEqualityComparer.Instance);
     var sortedSchemas = new Dictionary<string, OpenApiSchema>();
     foreach (var schema in schemas.OrderBy(s => s.Key, StringComparer.Ordinal))
     {
-        sortedSchemas[schema.Key] = SortSchemaProperties(schema.Value);
+        sortedSchemas[schema.Key] = SortSchemaProperties(schema.Value, visited);
     }
     return sortedSchemas;
 }
@@ -122,17 +124,32 @@ internal static IDictionary<string, OpenApiSchema> SortSchemas(IDictionary<string, OpenApiSchema> schemas)
 /// The schema to sort.
 /// The schema with sorted properties.
 internal static OpenApiSchema SortSchemaProperties(OpenApiSchema schema)
+{
+    return SortSchemaProperties(schema, new HashSet<OpenApiSchema>(ReferenceEqualityComparer.Instance));
+}
+
+///
+/// Sorts properties within a schema alphabetically with cycle detection.
+///
+/// The schema to sort.
+/// Set of already visited schemas to prevent infinite recursion.
+/// The schema with sorted properties.
+private static OpenApiSchema SortSchemaProperties(OpenApiSchema schema, HashSet<OpenApiSchema> visited)
 {
     if (schema == null)
         return new OpenApiSchema();

+    // Detect cycles - if we've already visited this schema instance, return it as-is
+    if (!visited.Add(schema))
+        return schema;
+
     if (schema.Properties != null && schema.Properties.Count > 0)
     {
         var sortedProperties = new Dictionary<string, OpenApiSchema>();
         foreach (var prop in schema.Properties.OrderBy(p => p.Key, StringComparer.Ordinal))
         {
             // Recursively sort nested schema properties
-            sortedProperties[prop.Key] = SortSchemaProperties(prop.Value);
+            sortedProperties[prop.Key] = SortSchemaProperties(prop.Value, visited);
         }
         schema.Properties = sortedProperties;
     }
@@ -140,32 +157,43 @@ internal static OpenApiSchema SortSchemaProperties(OpenApiSchema schema)
     // Sort items schema if present (for arrays)
     if (schema.Items != null)
     {
-        schema.Items = SortSchemaProperties(schema.Items);
+        schema.Items = SortSchemaProperties(schema.Items, visited);
     }

     // Sort additionalProperties schema if present
     if (schema.AdditionalProperties != null)
     {
-        schema.AdditionalProperties = SortSchemaProperties(schema.AdditionalProperties);
+        schema.AdditionalProperties = SortSchemaProperties(schema.AdditionalProperties, visited);
     }

     // Sort allOf, oneOf, anyOf schemas
     if (schema.AllOf != null && schema.AllOf.Count > 0)
     {
-        schema.AllOf = schema.AllOf.Select(SortSchemaProperties).ToList();
+        schema.AllOf = schema.AllOf.Select(s => SortSchemaProperties(s, visited)).ToList();
     }

     if (schema.OneOf != null && schema.OneOf.Count > 0)
     {
-        schema.OneOf = schema.OneOf.Select(SortSchemaProperties).ToList();
+        schema.OneOf = schema.OneOf.Select(s => SortSchemaProperties(s, visited)).ToList();
     }

     if (schema.AnyOf != null && schema.AnyOf.Count > 0)
     {
-        schema.AnyOf = schema.AnyOf.Select(SortSchemaProperties).ToList();
+        schema.AnyOf = schema.AnyOf.Select(s => SortSchemaProperties(s, visited)).ToList();
     }

     return schema;
 }

+///
+/// Reference equality comparer for OpenApiSchema to detect
+/// circular references.
+///
+private sealed class ReferenceEqualityComparer : IEqualityComparer<OpenApiSchema>
+{
+    public static readonly ReferenceEqualityComparer Instance = new();
+
+    public bool Equals(OpenApiSchema? x, OpenApiSchema? y) => ReferenceEquals(x, y);
+    public int GetHashCode(OpenApiSchema obj) => RuntimeHelpers.GetHashCode(obj);
+}
+
 ///
 /// Sorts tags alphabetically by name.
diff --git a/Oproto.Lambda.OpenApi.sln b/Oproto.Lambda.OpenApi.sln
index ad3f2d1..5a6480c 100644
--- a/Oproto.Lambda.OpenApi.sln
+++ b/Oproto.Lambda.OpenApi.sln
@@ -17,6 +17,12 @@ Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "Oproto.Lambda.OpenApi.Merge
 EndProject
 Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "Oproto.Lambda.OpenApi.Merge.Tests", "Oproto.Lambda.OpenApi.Merge.Tests\Oproto.Lambda.OpenApi.Merge.Tests.csproj", "{B63B2424-1484-422D-8B23-CE73F867583C}"
 EndProject
+Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "Oproto.Lambda.OpenApi.Merge.Lambda", "Oproto.Lambda.OpenApi.Merge.Lambda\Oproto.Lambda.OpenApi.Merge.Lambda.csproj", "{7C44294E-E0E1-4983-A046-BF8977FAFB3A}"
+EndProject
+Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "Oproto.Lambda.OpenApi.Merge.Lambda.Tests", "Oproto.Lambda.OpenApi.Merge.Lambda.Tests\Oproto.Lambda.OpenApi.Merge.Lambda.Tests.csproj", "{635C8E7F-2D69-4D40-81D7-9A0564745F6C}"
+EndProject
+Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "Oproto.Lambda.OpenApi.Merge.Cdk", "Oproto.Lambda.OpenApi.Merge.Cdk\Oproto.Lambda.OpenApi.Merge.Cdk.csproj", "{4E8C0F0B-9FFB-47A9-BFDD-20CADAE39F18}"
+EndProject
 Global
     GlobalSection(SolutionConfigurationPlatforms) = preSolution
         Debug|Any CPU = Debug|Any CPU
@@ -55,5 +61,17 @@ Global
         {B63B2424-1484-422D-8B23-CE73F867583C}.Debug|Any CPU.Build.0 = Debug|Any CPU
         {B63B2424-1484-422D-8B23-CE73F867583C}.Release|Any CPU.ActiveCfg = Release|Any CPU
         {B63B2424-1484-422D-8B23-CE73F867583C}.Release|Any CPU.Build.0 = Release|Any CPU
+        {7C44294E-E0E1-4983-A046-BF8977FAFB3A}.Debug|Any CPU.ActiveCfg = Debug|Any CPU
+ {7C44294E-E0E1-4983-A046-BF8977FAFB3A}.Debug|Any CPU.Build.0 = Debug|Any CPU + {7C44294E-E0E1-4983-A046-BF8977FAFB3A}.Release|Any CPU.ActiveCfg = Release|Any CPU + {7C44294E-E0E1-4983-A046-BF8977FAFB3A}.Release|Any CPU.Build.0 = Release|Any CPU + {635C8E7F-2D69-4D40-81D7-9A0564745F6C}.Debug|Any CPU.ActiveCfg = Debug|Any CPU + {635C8E7F-2D69-4D40-81D7-9A0564745F6C}.Debug|Any CPU.Build.0 = Debug|Any CPU + {635C8E7F-2D69-4D40-81D7-9A0564745F6C}.Release|Any CPU.ActiveCfg = Release|Any CPU + {635C8E7F-2D69-4D40-81D7-9A0564745F6C}.Release|Any CPU.Build.0 = Release|Any CPU + {4E8C0F0B-9FFB-47A9-BFDD-20CADAE39F18}.Debug|Any CPU.ActiveCfg = Debug|Any CPU + {4E8C0F0B-9FFB-47A9-BFDD-20CADAE39F18}.Debug|Any CPU.Build.0 = Debug|Any CPU + {4E8C0F0B-9FFB-47A9-BFDD-20CADAE39F18}.Release|Any CPU.ActiveCfg = Release|Any CPU + {4E8C0F0B-9FFB-47A9-BFDD-20CADAE39F18}.Release|Any CPU.Build.0 = Release|Any CPU EndGlobalSection EndGlobal diff --git a/README.md b/README.md index 1bf1e0c..3ce1e9f 100644 --- a/README.md +++ b/README.md @@ -26,6 +26,30 @@ A .NET source generator that automatically creates OpenAPI specifications from A The ecosystem also includes a powerful merge tool for combining multiple OpenAPI specifications into a single unified document. This is ideal for microservice architectures where each service generates its own OpenAPI spec. +### Lambda Merge Tool + +For automated workflows, the Lambda Merge Tool provides an AWS Lambda-based solution that automatically merges OpenAPI specs when changes are detected in S3. 
+ +**Features:** +- Automatic merging triggered by S3 events +- Step Functions-based debouncing for rapid changes +- Conditional writes (only updates when content changes) +- CloudWatch metrics and alarms + +**Quick Start with CDK:** + +```csharp +var mergeConstruct = new OpenApiMergeConstruct(this, "OpenApiMerge", new OpenApiMergeConstructProps +{ + InputBucket = bucket, + ApiPrefixes = new[] { "publicapi/", "internalapi/" } +}); +``` + +For detailed documentation, see [Lambda Merge Tool Documentation](docs/lambda-merge.md). + +### CLI Merge Tool + ## Project Website and Documentation Documentation, examples and more are located on [LambdaOpenApi](https://lambdaopenapi.dev) website. @@ -95,7 +119,8 @@ public async Task GetUser( - [Getting Started Guide](docs/getting-started.md) - [Attribute Reference](docs/attributes.md) - [Configuration Options](docs/configuration.md) -- [Merge Tool](docs/merge-tool.md) +- [Merge Tool CLI](docs/merge-tool.md) +- [Lambda Merge Tool](docs/lambda-merge.md) - [Examples](Oproto.Lambda.OpenApi.Examples/) - [Changelog](CHANGELOG.md) diff --git a/docs/lambda-merge.md b/docs/lambda-merge.md new file mode 100644 index 0000000..bb3f2e8 --- /dev/null +++ b/docs/lambda-merge.md @@ -0,0 +1,486 @@ +# Lambda Merge Tool + +The Lambda Merge Tool provides an AWS Lambda-based solution for automatically merging OpenAPI specification files when changes are detected in S3. This is ideal for CI/CD pipelines and automated documentation workflows where you want merged API specs to stay current without manual intervention. 
+ +## Features + +- **Automatic Merging**: Triggers automatically when OpenAPI spec files are uploaded or modified in S3 +- **Debouncing**: Batches rapid successive changes into a single merge operation using Step Functions +- **Flexible Configuration**: Supports both auto-discovery and explicit source file listing +- **Conditional Writes**: Only writes output when the merged result differs from existing output +- **Multi-API Support**: Single deployment can handle multiple API prefixes +- **CloudWatch Integration**: Built-in metrics and configurable alarms + +## Architecture + +``` +┌─────────────────┐ ┌──────────────┐ ┌─────────────────┐ ┌──────────────┐ +│ S3 Bucket │────▶│ EventBridge │────▶│ Step Functions │────▶│ Lambda │ +│ (Input Files) │ │ Rule │ │ (Debounce) │ │ (Merge) │ +└─────────────────┘ └──────────────┘ └─────────────────┘ └──────────────┘ + │ + ▼ + ┌──────────────┐ + │ S3 Bucket │ + │ (Output) │ + └──────────────┘ +``` + +1. **S3 Event**: User uploads/modifies a file in `{prefix}/` +2. **EventBridge Rule**: Filters events by prefix pattern, triggers Step Functions +3. **Debounce State Machine**: Waits for configurable duration, resets on new events +4. 
**Merge Lambda**: Loads config, discovers/loads sources, merges, compares, writes if changed + +## Deployment Options + +### Option 1: CDK (Recommended) + +Install the CDK construct package: + +```bash +dotnet add package Oproto.Lambda.OpenApi.Merge.Cdk +``` + +Add the construct to your CDK stack: + +```csharp +using Amazon.CDK; +using Amazon.CDK.AWS.S3; +using Oproto.Lambda.OpenApi.Merge.Cdk; + +public class MyStack : Stack +{ + public MyStack(Construct scope, string id, IStackProps props = null) : base(scope, id, props) + { + var bucket = new Bucket(this, "ApiBucket"); + + var mergeConstruct = new OpenApiMergeConstruct(this, "OpenApiMerge", new OpenApiMergeConstructProps + { + InputBucket = bucket, + ApiPrefixes = new[] { "publicapi/", "internalapi/" }, + DebounceSeconds = 5, + EnableAlarms = true + }); + + // Access outputs + new CfnOutput(this, "MergeFunctionArn", new CfnOutputProps + { + Value = mergeConstruct.MergeFunction.FunctionArn + }); + } +} +``` + +#### CDK Construct Properties + +| Property | Type | Default | Description | +|----------|------|---------|-------------| +| `InputBucket` | `IBucket` | Required | S3 bucket containing input files | +| `OutputBucket` | `IBucket` | InputBucket | S3 bucket for output files | +| `ApiPrefixes` | `string[]` | Required | List of API prefixes to monitor | +| `DebounceSeconds` | `int` | 5 | Wait time before triggering merge | +| `EnableAlarms` | `bool` | true | Create CloudWatch alarms | +| `AlarmThreshold` | `int` | 1 | Failure count threshold | +| `AlarmEvaluationPeriods` | `int` | 1 | Number of evaluation periods | +| `AlarmTopic` | `ITopic` | null | SNS topic for alarm notifications | +| `MemorySize` | `int` | 512 | Lambda memory size in MB | +| `TimeoutSeconds` | `int` | 60 | Lambda timeout in seconds | + +### Option 2: CloudFormation + +For users who don't use CDK, a standalone CloudFormation template is available. 
+ +#### Step 1: Build and Package the Lambda + +```bash +# Build the Lambda project +dotnet publish Oproto.Lambda.OpenApi.Merge.Lambda -c Release -o ./publish + +# Create deployment package +cd publish && zip -r ../lambda-package.zip . && cd .. +``` + +#### Step 2: Upload to S3 + +```bash +aws s3 cp lambda-package.zip s3://your-deployment-bucket/openapi-merge/lambda-package.zip +``` + +#### Step 3: Deploy the Stack + +```bash +aws cloudformation create-stack \ + --stack-name openapi-merge \ + --template-body file://Oproto.Lambda.OpenApi.Merge.Cdk/cloudformation/openapi-merge.yaml \ + --parameters \ + ParameterKey=InputBucketName,ParameterValue=your-api-specs-bucket \ + ParameterKey=LambdaCodeS3Bucket,ParameterValue=your-deployment-bucket \ + ParameterKey=LambdaCodeS3Key,ParameterValue=openapi-merge/lambda-package.zip \ + --capabilities CAPABILITY_NAMED_IAM +``` + +#### CloudFormation Parameters + +| Parameter | Description | Default | +|-----------|-------------|---------| +| `InputBucketName` | S3 bucket containing input files (required) | - | +| `OutputBucketName` | S3 bucket for output files (optional) | Same as input | +| `ApiPrefixes` | Comma-separated list of API prefixes | '' | +| `LambdaCodeS3Bucket` | S3 bucket with Lambda package (required) | - | +| `LambdaCodeS3Key` | S3 key for Lambda package (required) | - | +| `MemorySize` | Lambda memory size in MB | 512 | +| `TimeoutSeconds` | Lambda timeout in seconds | 60 | +| `DebounceSeconds` | Debounce wait time in seconds | 5 | +| `EnableAlarms` | Create CloudWatch alarms | 'true' | +| `AlarmThreshold` | Failure count threshold | 1 | +| `AlarmEvaluationPeriods` | Evaluation periods for alarms | 1 | +| `AlarmSnsTopicArn` | SNS topic for alarm notifications | '' | + +## Configuration File Format + +Each API prefix requires a `config.json` file that defines how the merge should be performed. + +### File Location + +Place the config file at `{prefix}/config.json` in your S3 bucket. 
For example: +- `publicapi/config.json` +- `internalapi/config.json` + +### Configuration Schema + +```json +{ + "info": { + "title": "string (required)", + "version": "string (required)", + "description": "string (optional)" + }, + "servers": [ + { + "url": "string (required)", + "description": "string (optional)" + } + ], + "autoDiscover": "boolean (optional, default: false)", + "excludePatterns": ["string (optional)"], + "sources": [ + { + "path": "string (required when autoDiscover is false)", + "pathPrefix": "string (optional)", + "operationIdPrefix": "string (optional)", + "name": "string (optional)" + } + ], + "output": "string (required)", + "outputBucket": "string (optional, Lambda-only)", + "schemaConflict": "rename | first-wins | fail (optional, default: rename)" +} +``` + +### Example: Auto-Discovery Mode + +When `autoDiscover` is true, the Lambda automatically finds all `.json` files in the prefix directory (excluding `config.json` and the output file). + +```json +{ + "info": { + "title": "Public API", + "version": "1.0.0", + "description": "Merged public API specification" + }, + "servers": [ + { + "url": "https://api.example.com/v1", + "description": "Production" + } + ], + "autoDiscover": true, + "excludePatterns": ["*-draft.json", "*.backup.json"], + "output": "merged-openapi.json", + "schemaConflict": "rename" +} +``` + +### Example: Explicit Sources Mode + +When `autoDiscover` is false (default), you must explicitly list the source files. 
+ +```json +{ + "info": { + "title": "Internal API", + "version": "2.0.0" + }, + "autoDiscover": false, + "sources": [ + { + "path": "users-service.json", + "name": "Users", + "pathPrefix": "/users" + }, + { + "path": "orders-service.json", + "name": "Orders", + "pathPrefix": "/orders" + } + ], + "output": "internal-api.json", + "schemaConflict": "rename" +} +``` + +### Example: Dual-Bucket Configuration + +Write output to a different bucket: + +```json +{ + "info": { + "title": "My API", + "version": "1.0.0" + }, + "autoDiscover": true, + "output": "api-docs/merged.json", + "outputBucket": "my-documentation-bucket" +} +``` + +## S3 Bucket Structure + +### Single-Bucket Mode with Separate Output Prefix (Recommended) + +You can specify a full path for the output to write it to a different prefix, avoiding the re-trigger issue: + +``` +my-api-bucket/ +├── publicapi/ +│ ├── config.json # Merge configuration +│ ├── users-service.json # Source spec +│ └── orders-service.json # Source spec +└── output/ + └── publicapi/ + └── merged-openapi.json # Output (not in monitored prefix) +``` + +Config file: +```json +{ + "info": { "title": "Public API", "version": "1.0.0" }, + "autoDiscover": true, + "output": "output/publicapi/merged-openapi.json" +} +``` + +When the `output` value contains a `/`, it's treated as a full S3 key (not relative to the prefix). This lets you write output to any location in the bucket. + +### Single-Bucket Mode with Same Prefix + +When using a simple filename (no `/`), the output is written to the same prefix as the sources. This triggers another S3 event, but the system handles it gracefully: + +1. **Conditional writes** - Only writes when content actually changes +2. **Debouncing** - Batches rapid events together +3. **Idempotent merges** - Re-merging produces identical output, so no second write occurs + +This results in one extra Step Functions execution per merge (the re-triggered one exits without writing). 
+ +``` +my-api-bucket/ +├── publicapi/ +│ ├── config.json # Merge configuration +│ ├── users-service.json # Source spec +│ ├── orders-service.json # Source spec +│ └── merged-openapi.json # Output (triggers re-merge, but no write) +``` + +Config file: +```json +{ + "info": { "title": "Public API", "version": "1.0.0" }, + "autoDiscover": true, + "output": "merged-openapi.json" +} +``` + +### Dual-Bucket Mode (Recommended for Production) + +Using separate buckets for input and output completely eliminates the re-trigger issue: + +Input bucket: +``` +input-bucket/ +└── publicapi/ + ├── config.json + ├── users-service.json + └── orders-service.json +``` + +Output bucket: +``` +output-bucket/ +└── publicapi/ + └── merged-openapi.json +``` + +## Debounce Behavior + +The debounce mechanism prevents excessive merge operations when multiple files are uploaded in quick succession. + +### How It Works + +1. When an S3 event occurs, the Step Functions state machine starts a timer +2. If another event occurs for the same prefix during the wait period, the timer resets +3. When the timer expires without new events, the merge Lambda is invoked +4. If events arrive during merge execution, another merge is triggered after completion + +### Timing Considerations + +- **Default debounce**: 5 seconds +- **Recommended for CI/CD**: 5-10 seconds (allows batch uploads to complete) +- **Recommended for manual uploads**: 2-5 seconds (faster feedback) + +### Post-Merge Event Handling + +The state machine checks for events that arrived during merge execution. If new events are detected, it loops back and performs another merge to ensure no changes are missed. 
+ +``` +Event 1 ──▶ Wait 5s ──▶ Merge ──▶ Check for new events ──▶ Done + │ +Event 2 (during merge) ◀─────────────────┘ + │ + ▼ + Wait 5s ──▶ Merge ──▶ Done +``` + +## Monitoring and Observability + +### CloudWatch Metrics + +The Lambda emits the following metrics to the `OpenApiMerge` namespace: + +| Metric | Description | +|--------|-------------| +| `MergeDuration` | Time taken to complete merge (milliseconds) | +| `MergeSuccess` | Count of successful merges | +| `MergeFailures` | Count of failed merges | +| `FilesProcessed` | Number of source files processed | + +### CloudWatch Logs + +The Lambda logs detailed information about each merge operation: + +- Merge start and completion with timing +- S3 read and write operations +- Warnings for skipped files +- Full error details with stack traces + +### CloudWatch Alarms + +When `EnableAlarms` is true, an alarm is created for merge failures: + +- **Alarm Name**: `{stack-name}-merge-failures` +- **Threshold**: Configurable (default: 1 failure) +- **Period**: 5 minutes +- **Action**: Optional SNS notification + +## Troubleshooting + +### Common Issues + +#### Extra Step Functions executions in single-bucket mode + +**Cause**: When using single-bucket mode with output in the same prefix, writing the merged output triggers another S3 event, which starts another Step Functions execution. + +**Behavior**: The re-triggered execution will: +1. Load config and discover sources +2. Perform the merge (producing identical output) +3. Compare with existing output (finds no changes) +4. Skip writing (no actual S3 write occurs) +5. Exit cleanly + +**Impact**: One extra Step Functions execution per merge. No infinite loop occurs because the second merge doesn't write anything. + +**Solution**: For high-volume scenarios, use dual-bucket mode to eliminate this overhead entirely. 
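+The compare-before-write behavior described above can be sketched as a digest comparison. This is an illustrative helper only — the names are hypothetical, not the Lambda's actual code — and it assumes the existing output object and the freshly merged document have already been read into strings:
+
+```csharp
+using System;
+using System.Security.Cryptography;
+using System.Text;
+
+static class ConditionalWriteCheck
+{
+    // Returns true only when a PutObject is actually needed, i.e. when the
+    // merged document differs from the output object currently in S3.
+    public static bool NeedsWrite(string? existingOutput, string mergedOutput)
+    {
+        if (existingOutput is null) return true; // no output object exists yet
+
+        // Comparing fixed-size SHA-256 digests keeps the check cheap even
+        // for large merged specifications.
+        return !Sha256Hex(existingOutput).Equals(Sha256Hex(mergedOutput), StringComparison.Ordinal);
+    }
+
+    private static string Sha256Hex(string content) =>
+        Convert.ToHexString(SHA256.HashData(Encoding.UTF8.GetBytes(content)));
+}
+```
+
+Because the merge is deterministic, a re-triggered execution in single-bucket mode produces an identical document, the check returns `false`, and no second write occurs — which is what prevents an event loop.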
+ +#### Config file not found + +``` +Error: Configuration file not found at publicapi/config.json +``` + +**Solution**: Ensure `config.json` exists at the correct prefix path in your S3 bucket. + +#### Invalid JSON in config + +``` +Error: Invalid JSON: Unexpected character at position 42 +``` + +**Solution**: Validate your `config.json` with a JSON linter. + +#### No valid source files found + +``` +Error: No valid source files found +``` + +**Solution**: +- If using `autoDiscover: true`, ensure `.json` files exist in the prefix +- If using explicit sources, verify the file paths are correct +- Check that files aren't excluded by `excludePatterns` + +#### Schema conflict (with fail strategy) + +``` +Error: Schema conflict: 'Response' is defined differently in 'Users Service' and 'Products Service' +``` + +**Solution**: Use `schemaConflict: "rename"` or `"first-wins"`, or manually resolve the conflict in your source specs. + +#### Access denied + +``` +Error: Access denied to my-bucket/publicapi/config.json +``` + +**Solution**: Verify the Lambda's IAM role has `s3:GetObject` permission on the input bucket and `s3:PutObject` on the output bucket. + +### Debugging Tips + +1. **Check CloudWatch Logs**: The Lambda logs detailed information about each step +2. **Verify S3 Event Notifications**: Ensure EventBridge is receiving S3 events +3. **Check Step Functions Execution**: View the state machine execution history in the AWS Console +4. 
**Test Locally**: Use the CLI merge tool to test your configuration before deploying + +### Step Functions Execution States + +| State | Description | +|-------|-------------| +| `ExtractPrefix` | Extracts API prefix from S3 key | +| `CheckExistingExecution` | Checks if another execution is handling this prefix | +| `WaitForDebounce` | Waits for debounce period | +| `InvokeMergeLambda` | Invokes the merge Lambda | +| `CheckPostMergeEvents` | Checks for events that arrived during merge | +| `CleanupExecution` | Removes debounce state from DynamoDB | + +## Best Practices + +1. **Use dual-bucket mode for high-volume scenarios**: If you have frequent updates, using a separate output bucket eliminates the re-trigger overhead entirely + +2. **Use meaningful prefixes**: Organize APIs by domain or team (e.g., `payments/`, `users/`, `admin/`) + +3. **Set appropriate debounce**: Balance between responsiveness and efficiency based on your upload patterns + +4. **Use auto-discover for simple setups**: When all specs in a prefix should be merged + +5. **Use explicit sources for control**: When you need path prefixes or want to exclude certain files + +6. **Monitor merge failures**: Set up SNS notifications for the CloudWatch alarm + +7. **Version control configs**: Keep your `config.json` files in version control alongside your source specs + +8. 
**Test with CLI first**: Use the CLI merge tool to validate your configuration before deploying to Lambda + +## Related Documentation + +- [Merge Tool CLI](merge-tool.md) - Command-line merge tool documentation +- [CDK Construct README](../Oproto.Lambda.OpenApi.Merge.Cdk/README.md) - CDK-specific documentation diff --git a/docs/merge-tool.md b/docs/merge-tool.md index cddea9a..0598389 100644 --- a/docs/merge-tool.md +++ b/docs/merge-tool.md @@ -108,9 +108,11 @@ The configuration file provides full control over the merge process, including p "description": "string (optional)" } ], + "autoDiscover": "boolean (optional, default: false)", + "excludePatterns": ["string (optional)"], "sources": [ { - "path": "string (required)", + "path": "string (required when autoDiscover is false)", "pathPrefix": "string (optional)", "operationIdPrefix": "string (optional)", "name": "string (optional)" @@ -158,6 +160,37 @@ The configuration file provides full control over the merge process, including p } ``` +### Example Configuration with Auto-Discovery + +When you want to automatically merge all OpenAPI specs in a directory without listing them explicitly: + +```json +{ + "info": { + "title": "Platform API", + "version": "1.0.0", + "description": "Auto-discovered API specifications" + }, + "servers": [ + { "url": "https://api.example.com", "description": "Production" } + ], + "autoDiscover": true, + "excludePatterns": [ + "*-draft.json", + "*.backup.json", + "test-*.json" + ], + "output": "./merged-openapi.json", + "schemaConflict": "rename" +} +``` + +This configuration will: +1. Find all `.json` files in the same directory as the config file +2. Exclude any files matching the patterns in `excludePatterns` +3. Automatically exclude the config file itself and the output file +4. 
Merge all remaining files into `merged-openapi.json` + ### Configuration Properties #### info (required) @@ -185,6 +218,28 @@ Array of source specifications to merge: File path for the merged specification output. Supports tilde expansion (see below). +#### autoDiscover (optional) + +When set to `true`, the merge tool automatically discovers all `.json` files in the same directory as the configuration file, instead of using the explicit `sources` list. Default is `false`. + +- Automatically excludes `config.json` (or whatever the config file is named) +- Automatically excludes the output file +- Respects `excludePatterns` for additional filtering + +#### excludePatterns (optional) + +Array of glob patterns for files to exclude from auto-discovery. Only used when `autoDiscover` is `true`. + +Supported glob patterns: +- `*` matches any characters except path separators +- `**` matches any characters including path separators +- `?` matches a single character + +Examples: +- `*-draft.json` - excludes files ending with `-draft.json` +- `*.backup.json` - excludes files ending with `.backup.json` +- `test-*.json` - excludes files starting with `test-` + #### schemaConflict (optional) Strategy for handling schema naming conflicts. Default is `rename`.
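+As a rough illustration of the `excludePatterns` glob forms documented above (a hypothetical helper, not the merge tool's actual implementation), the three wildcards translate naturally to an anchored regular expression:
+
+```csharp
+using System.Text;
+using System.Text.RegularExpressions;
+
+static class GlobPatterns
+{
+    // Translates the documented glob forms to an anchored regex:
+    //   *  -> any run of characters except '/'
+    //   ** -> any run of characters, including '/'
+    //   ?  -> exactly one character other than '/'
+    public static Regex ToRegex(string glob)
+    {
+        var sb = new StringBuilder("^");
+        for (int i = 0; i < glob.Length; i++)
+        {
+            char c = glob[i];
+            if (c == '*' && i + 1 < glob.Length && glob[i + 1] == '*')
+            {
+                sb.Append(".*");
+                i++; // consume the second '*'
+            }
+            else if (c == '*') sb.Append("[^/]*");
+            else if (c == '?') sb.Append("[^/]");
+            else sb.Append(Regex.Escape(c.ToString()));
+        }
+        return new Regex(sb.Append('$').ToString());
+    }
+}
+```
+
+With this reading, `*-draft.json` matches `users-draft.json` but not `nested/users-draft.json`, while `**-draft.json` would match both.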