Contributing to oastools
Thank you for your interest in contributing to oastools! This document provides everything you need to know to contribute effectively.
Table of Contents
- Quick Start
- Development Workflow
- Project Architecture
- Code Standards
- Testing Requirements
- Submitting Changes
- CI/CD and Automation
- Getting Help
Quick Start
Prerequisites
- Go 1.24+ - Required for development
- golangci-lint - For linting (optional but recommended)
- gotestsum - For better test output formatting (optional)
Clone and Build
# Clone the repository
git clone https://github.com/erraggy/oastools.git
cd oastools
# Install dependencies
make deps
# Build the binary
make build
# Run tests
make test
# Run all quality checks
make check
Development Workflow
The Golden Rule: Always Run make check
After making changes, always run make check. This command runs:
1. go mod tidy - Clean up dependencies
2. go fmt - Format code
3. golangci-lint run - Lint code
4. go test with race detection - Run tests
5. git status - Show what changed
Common Development Commands
# Build the binary (outputs to bin/oastools)
make build
# Install to $GOPATH/bin
make install
# Run tests with coverage
make test
# Generate HTML coverage report
make test-coverage
# Format code
make fmt
# Run linter
make lint
# Clean build artifacts
make clean
Development Loop
- Make your changes
- Run make check to validate
- Fix any issues reported
- Commit your changes
- Create a pull request
Project Architecture
What is oastools?
oastools is a Go-based CLI tool and library for working with OpenAPI Specification (OAS) files. It provides:
- Validation - Ensure OAS files conform to specifications
- Parsing - Load and analyze OAS documents
- Joining - Combine multiple OAS files
- Converting - Transform between OAS versions (2.0 ↔ 3.x)
- Diffing - Compare specs and detect breaking changes
Directory Structure
oastools/
├── cmd/oastools/ # CLI entry point
│ └── main.go # Command dispatcher
├── parser/ # Parse OAS files (public API)
├── validator/ # Validate OAS files (public API)
├── joiner/ # Join multiple OAS files (public API)
├── converter/ # Convert between OAS versions (public API)
├── differ/ # Compare OAS files (public API)
├── internal/ # Internal utilities (not public API)
│ ├── httputil/ # HTTP constants and validation
│ ├── severity/ # Issue severity levels
│ ├── issues/ # Unified issue reporting
│ └── testutil/ # Test helpers
└── testdata/ # Test fixtures
Public vs Internal Packages
Public packages (can be imported by external projects):
- parser - Parse OpenAPI specifications
- validator - Validate OpenAPI specifications
- joiner - Join multiple specifications
- converter - Convert between versions
- differ - Compare specifications
Internal packages (project-only):
- internal/* - Shared utilities not exposed to external users
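For example, an external project could import the parser like this (the module path follows the repository URL above; Parse's signature mirrors the godoc example later in this guide, so double-check it against the published package docs):
```go
package main

import (
	"fmt"
	"log"

	"github.com/erraggy/oastools/parser"
)

func main() {
	// Parse a spec, resolving refs and validating structure.
	result, err := parser.Parse("openapi.yaml", true, true)
	if err != nil {
		log.Fatal(err)
	}
	// SourceFormat is the detected input format, described below.
	fmt.Println("source format:", result.SourceFormat)
}
```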
Design Principles
- Public API First - Core functionality is exposed as importable Go packages
- Separation of Concerns - Each package has one responsibility
- Format Preservation - Input format (JSON/YAML) is automatically preserved
- Comprehensive Documentation - Every public package has doc.go and example_test.go
- Testability - High test coverage required for all exported functionality
Code Standards
Go Style Guidelines
- Follow standard Go conventions
- Use gofmt for formatting (run via make fmt)
- Pass golangci-lint checks (run via make lint)
- Use meaningful variable and function names
- Write self-documenting code; add comments only where logic isn't obvious
Documentation Requirements
All exported functionality must have:
- Godoc comments - Describe what it does
- Package-level docs - Update doc.go if adding new public APIs
- Runnable examples - Add to example_test.go for new features
Example:
// Parse parses an OpenAPI specification file from the given path.
// It automatically detects the file format (JSON or YAML) and validates
// the document structure if validateStructure is true.
func Parse(specPath string, resolveRefs bool, validateStructure bool) (*ParseResult, error) {
// Implementation
}
Constant Usage
Always use package-level constants instead of string literals:
// ❌ Bad - hardcoded strings
if method == "get" { ... }
// ✅ Good - use constants
if method == httputil.MethodGet { ... }
This ensures:
- Single source of truth
- Type safety
- Easy refactoring
- Clear intent
Format Preservation
IMPORTANT: The parser, converter, and joiner automatically preserve input file format.
- Input JSON → Output JSON
- Input YAML → Output YAML
This is handled automatically via the SourceFormat field in ParseResult. When writing new features:
- Don't manually choose output format
- Do use result.SourceFormat to determine marshaling
- Test both JSON and YAML format preservation
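For example, a feature that writes a document back out might branch on that field. This is only a sketch: the SourceFormat values shown ("json"/"yaml") and the yaml.v3 dependency are assumptions, not the parser's confirmed API.
```go
// Sketch only: assumes SourceFormat compares like a string and that
// YAML output goes through gopkg.in/yaml.v3; verify both details
// against the real parser package.
package example

import (
	"encoding/json"

	"gopkg.in/yaml.v3"
)

func marshalPreservingFormat(doc any, sourceFormat string) ([]byte, error) {
	if sourceFormat == "yaml" {
		return yaml.Marshal(doc) // input was YAML, so emit YAML
	}
	return json.MarshalIndent(doc, "", "  ") // input was JSON, so emit JSON
}
```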
Error Handling
- Return errors; don't panic (except for programmer errors)
- Use fmt.Errorf with %w for error wrapping
- Provide context in error messages
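A minimal sketch of that style (loadSpec and its message wording are illustrative, not an existing function):
```go
package example

import (
	"fmt"
	"os"
)

// loadSpec reads a spec file and wraps any failure with %w so callers
// can inspect the underlying error with errors.Is / errors.As.
func loadSpec(path string) ([]byte, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return nil, fmt.Errorf("reading spec %s: %w", path, err)
	}
	return data, nil
}
```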
Testing Requirements
Coverage Expectations
All exported functionality MUST have comprehensive test coverage.
This includes:
- ✅ Exported functions (e.g., parser.Parse())
- ✅ Exported methods (e.g., Parser.Parse())
- ✅ Exported types and their fields
- ✅ Exported constants and variables
Test Types Required
- Positive tests - Valid inputs produce expected outputs
- Negative tests - Invalid inputs produce appropriate errors
- Edge cases - Boundary conditions, empty inputs, nil values
- Integration tests - Multiple components working together
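A table-driven test is a convenient way to cover the first three in one place. The sketch below assumes it lives in the parser package; the invalid.yaml fixture and the exact expectations are hypothetical.
```go
func TestParseTableDriven(t *testing.T) {
	cases := []struct {
		name    string
		path    string
		wantErr bool
	}{
		{"valid spec", "testdata/petstore.yaml", false}, // positive
		{"malformed spec", "testdata/invalid.yaml", true}, // negative
		{"empty path", "", true}, // edge case
	}
	for _, tc := range cases {
		t.Run(tc.name, func(t *testing.T) {
			_, err := Parse(tc.path, false, true)
			if (err != nil) != tc.wantErr {
				t.Fatalf("Parse(%q) error = %v, wantErr %v", tc.path, err, tc.wantErr)
			}
		})
	}
}
```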
Test Naming Convention
// Package-level convenience functions
func TestParseConvenience(t *testing.T) { ... }
// Struct methods
func TestParserParse(t *testing.T) { ... }
// Specific features
func TestJSONFormatPreservation(t *testing.T) { ... }
Benchmark Tests
Use the Go 1.24+ b.Loop() pattern:
func BenchmarkParse(b *testing.B) {
// Setup
specPath := "testdata/petstore.yaml"
// Benchmark loop
for b.Loop() {
_, err := Parse(specPath, false, true)
if err != nil {
b.Fatal(err)
}
}
}
Don't use:
- ❌ for i := 0; i < b.N; i++ (old pattern)
- ❌ b.ReportAllocs() (handled automatically by b.Loop())
Running Tests
# Run all tests
make test
# Run tests with coverage
make test-coverage
# Run specific package tests
go test ./parser/...
# Run specific test
go test ./parser -run TestParse
# Run benchmarks
go test -bench=. ./parser
Submitting Changes
Before You Commit
- ✅ Run make check and ensure it passes
- ✅ Add tests for new functionality
- ✅ Update documentation (godoc, doc.go, example_test.go)
- ✅ Verify test coverage is sufficient
- ✅ Update benchmarks with make bench-save for performance-impacting changes
Commit Message Format
Use conventional commit format:
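The general shape is a typed subject line with an optional scope, followed by an optional body:
```
<type>(<scope>): <short summary>

<optional body explaining what changed and why>
```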
Types:
- feat - New feature
- fix - Bug fix
- docs - Documentation only
- test - Adding/updating tests
- refactor - Code restructuring (no behavior change)
- perf - Performance improvements
- chore - Maintenance tasks
Examples:
feat(parser): add support for OAS 3.2.0
Implemented parsing logic for the new OAS 3.2.0 specification,
including support for the updated JSON Schema Draft 2020-12
alignment and new spec features.
- Added version detection for 3.2.0
- Updated schema validation
- Added test fixtures for 3.2.0
fix(converter): handle nullable types in OAS 3.1 conversion
Fixed conversion of OAS 3.1 nullable types that use type arrays
instead of the deprecated nullable field.
Fixes #123
Skipping Automated Code Review
You can skip the Claude Code Review workflow for trivial commits by adding [skip-review] to your commit message:
# Example: Skip review for automated formatting
git commit -m "chore: run go fmt
[skip-review] - automated code formatting, no logic changes"
# Example: Skip review for dependency updates
git commit -m "chore: update dependencies
[skip-review] - go mod tidy only"
When to use [skip-review]:
- Automated formatting (go fmt, gofmt)
- Dependency updates (go mod tidy)
- Minor documentation typos
- Whitespace or comment-only changes
When NOT to use [skip-review]:
- Any logic changes
- New features
- Bug fixes
- Refactoring
- Test additions/changes
The review will be skipped if any commit in your PR contains [skip-review].
Pull Request Process
1. Create a feature branch
2. Make your changes and commit
3. Push to your fork
4. Create a Pull Request
   - Use a clear, descriptive title
   - Reference any related issues
   - Describe what changed and why
   - Include testing instructions if applicable
5. Address Review Feedback
   - Respond to comments
   - Make requested changes
   - Push updates to your branch
   - Request re-review when ready
PR Review Checklist
Before requesting review, ensure:
- [ ] make check passes
- [ ] All tests pass with make test
- [ ] New functionality has tests
- [ ] Public APIs have godoc comments
- [ ] Examples added for new features
- [ ] Benchmarks updated with make bench-save (if changes affect performance)
- [ ] No unintended files committed (e.g., binaries, editor files)
- [ ] Commit messages follow conventional format
- [ ] Commit messages follow conventional format
CI/CD and Automation
Automated Workflows
When you create a PR, several automated workflows run:
- Go Tests - Runs test suite across multiple Go versions
- golangci-lint - Lints code for issues
- Claude Code Review (optional) - AI-powered code review
  - Skipped if [skip-review] appears in any commit message
  - Provides feedback on code quality, bugs, performance, security
Workflow Status
Check workflow status:
- In your PR - See status checks at the bottom
- On the Actions tab - https://github.com/erraggy/oastools/actions
Common CI Issues
Tests fail on CI but pass locally:
- Ensure you're testing with race detection: go test -race
- Check for timing-dependent tests
- Verify all test files are committed
Exit code 143 (SIGTERM):
- This means the test process was killed by the runner
- Common with go test -race on GitHub Actions
- Usually indicates tests hung or timed out
- Current mitigations in place:
- Test timeout: 10 minutes per package
- Job timeout: 15 minutes total
- Limited parallelism: -parallel=4
- GOMAXPROCS=2 to prevent resource exhaustion
- If you see this error intermittently, it's likely a runner resource issue, not your code
- Related: actions/runner-images#6680, actions/runner-images#7146
Linter fails:
- Run make lint locally
- Fix reported issues
- Push fixes
Claude Code Review comments:
- Review the feedback (visible in PR comments)
- Address legitimate concerns
- Respond to questions
- Push updates if needed
Key OpenAPI Concepts
Supported OAS Versions
oastools supports all major OpenAPI Specification versions:
- OAS 2.0 (Swagger) - Specification
- OAS 3.0.x (3.0.0 - 3.0.4) - Specification
- OAS 3.1.x (3.1.0 - 3.1.2) - Specification
- OAS 3.2.0 - Specification
OAS 3.1 and later align with JSON Schema Draft 2020-12 for schema definitions; OAS 2.0 and 3.0.x use their own restricted JSON Schema dialects.
Version Evolution
Understanding how OAS evolved helps when working with conversion and validation:
OAS 2.0 → 3.0 Changes:
- host/basePath/schemes → unified servers array
- definitions/parameters/responses → components.*
- consumes + body param → requestBody.content
- produces + schema → responses.*.content
- Added: callbacks, links, cookie parameters
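As a rough illustration of the first mapping (this is not the converter's actual code; the function name and default scheme are assumptions):
```go
// serverURLs shows how a 2.0 host/basePath/schemes triple could be
// flattened into the URLs of a 3.0 servers array.
func serverURLs(schemes []string, host, basePath string) []string {
	if len(schemes) == 0 {
		schemes = []string{"https"} // assumed default when schemes is omitted
	}
	urls := make([]string, 0, len(schemes))
	for _, scheme := range schemes {
		urls = append(urls, scheme+"://"+host+basePath)
	}
	return urls
}
```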
OAS 3.0 → 3.1 Changes:
- Full JSON Schema alignment
- type can be array: ["string", "null"]
- Deprecated nullable field
- Added webhooks for event-driven APIs
- Added license.identifier
Critical Type Handling
Be careful with interface{} fields:
Some OAS 3.1+ fields accept multiple types and are defined as interface{}:
// schema.Type can be string OR []string
if typeStr, ok := schema.Type.(string); ok {
// Single type: "string"
} else if typeArr, ok := schema.Type.([]string); ok {
// Multiple types: ["string", "null"]
}
Always use type assertions - never assume the type!
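One way to keep that discipline in a single place is a small normalizing helper. This is a sketch, not part of the actual API, and the []interface{} case assumes how a generic decoder might represent the array form:
```go
// schemaTypes normalizes a Type value into a slice of type names,
// whichever of the possible forms it holds.
func schemaTypes(t interface{}) []string {
	switch v := t.(type) {
	case string:
		return []string{v}
	case []string:
		return v
	case []interface{}:
		out := make([]string, 0, len(v))
		for _, item := range v {
			if s, ok := item.(string); ok {
				out = append(out, s)
			}
		}
		return out
	default:
		return nil
	}
}
```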
Version-Specific Features
OAS 2.0 Only:
- allowEmptyValue (removed in 3.0+)
- collectionFormat (replaced by style/explode)
OAS 3.0+ Only:
- requestBody (replaces body parameters)
- callbacks (async operations)
- links (operation relationships)
- Cookie parameters (in: cookie)
- TRACE HTTP method
OAS 3.1+ Only:
- webhooks (event subscriptions)
- Type arrays for nullable
- license.identifier
When working with conversions, these differences matter!
Getting Help
Resources
- Documentation: See CLAUDE.md for technical details
- Release Process: See RELEASES.md for release workflow
- Issues: GitHub Issues
- Pull Requests: GitHub PRs
OpenAPI Specifications
- Official specification texts for all versions: https://spec.openapis.org/
Asking Questions
- Open a GitHub Issue for bugs or feature requests
- Start a GitHub Discussion for questions
- Check existing issues/discussions first - your question may already be answered
Before Filing an Issue
- Search existing issues
- Provide a minimal reproduction case
- Include relevant version information (oastools version and Go version)
- Include error messages and stack traces
- Describe expected vs actual behavior
Common Pitfalls
Type Assertions
Problem: Assuming schema.Type is always a string
// ❌ Wrong - will panic if Type is []string
typeStr := schema.Type.(string)
// ✅ Correct - safe type assertion
if typeStr, ok := schema.Type.(string); ok {
// Handle string case
} else if typeArr, ok := schema.Type.([]string); ok {
// Handle array case
}
Pointer Slices
Problem: Creating value slices instead of pointer slices
// ❌ Wrong - OAS3Document.Servers expects []*parser.Server
servers := []parser.Server{{URL: "http://api.example.com"}}
// ✅ Correct - use pointer slice
servers := []*parser.Server{{URL: "http://api.example.com"}}
Document Mutation
Problem: Modifying source documents unintentionally
// ❌ Wrong - may mutate original
modified := sourceDoc
modified.Info.Title = "New Title" // Changes sourceDoc too!
// ✅ Correct - deep copy first
data, _ := json.Marshal(sourceDoc)
var modified parser.OAS3Document
json.Unmarshal(data, &modified)
modified.Info.Title = "New Title" // Safe
Hardcoded Strings
Problem: Using string literals instead of constants. See the Constant Usage section above for the preferred pattern (e.g., httputil.MethodGet instead of "get").
Missing Tests
Problem: Not testing exported functionality
// ❌ Wrong - exported but no tests
func NewParser() *Parser { ... }
// ✅ Correct - exported with tests
func TestNewParser(t *testing.T) { ... }
License
By contributing to oastools, you agree that your contributions will be licensed under the MIT License.
Happy Contributing! 🎉
We appreciate your time and effort in helping make oastools better. If you have questions or need clarification on anything in this guide, please don't hesitate to ask.