Projects

A selection of my favorite projects I’ve created or contributed to.

Prioritization in AI Product Development: The Art of Strategic No

Building production AI systems requires intense focus. Every new feature, every experiment, every optimization competes for limited resources - engineer time, GPU hours, and cognitive bandwidth. The teams that ship successful products aren’t those that do everything; they’re those that master the discipline of not doing.

The Mathematics of Focus

Consider a typical AI team’s potential workload:

class ProjectLoad:
    def __init__(self):
        self.potential_projects = [
            "Implement transformer architecture",
            "Build real-time inference pipeline", 
            "Create data labeling platform",
            "Optimize model for edge deployment",
            "Develop explainability dashboard",
            "Refactor feature engineering pipeline",
            "Implement A/B testing framework",
            "Build model monitoring system",
            "Create automated retraining pipeline",
            "Develop custom loss functions"
        ]
        
    def calculate_completion_rate(self, projects_attempted):
        capacity = 100  # Team capacity units
        effort_per_project = 30  # Average effort units
        context_switching_cost = 5 * (projects_attempted - 1)
        
        actual_capacity = capacity - context_switching_cost
        completion_rate = min(1.0, actual_capacity / (projects_attempted * effort_per_project))
        
        return {
            'projects_attempted': projects_attempted,
            'completion_rate': completion_rate,
            'projects_completed': int(projects_attempted * completion_rate)
        }
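
# A small driver (assumed, not part of the original snippet) to reproduce
# the results summarized below:
if __name__ == "__main__":
    load = ProjectLoad()
    for n in (2, 5, 10):
        print(load.calculate_completion_rate(n))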

# Results:
# 2 projects: 100% completion = 2 completed
# 5 projects: ~53% completion = 2 completed
# 10 projects: ~18% completion = 1 completed

Attempting everything guarantees completing nothing of value.

Makefiles for ML Pipelines: Reproducible Builds That Scale

In the era of complex ML pipelines, where data processing, model training, and deployment involve dozens of interdependent steps, Makefiles provide a battle-tested solution for orchestration. While newer tools promise simplicity through abstraction, Makefiles offer transparency, portability, and power that modern AI systems demand.

Why Makefiles Excel in AI/ML Workflows

Modern ML projects involve intricate dependency chains:

  • Raw data → Cleaned data → Features → Training → Evaluation → Deployment
  • Model artifacts depend on specific data versions
  • Experiments must be reproducible across environments
  • Partial re-runs save computational resources

Makefiles handle these challenges elegantly through their fundamental design: declarative dependency management with intelligent rebuild detection.
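
A minimal sketch of that idea; the file names, scripts, and targets here are illustrative, not taken from the article:

# Each target names the files it depends on; recipe lines are tab-indented.
all: model.pkl
.PHONY: all

data/clean.csv: data/raw.csv clean.py
	python clean.py data/raw.csv $@

features.parquet: data/clean.csv featurize.py
	python featurize.py data/clean.csv $@

model.pkl: features.parquet train.py
	python train.py features.parquet $@

After editing featurize.py, running make rebuilds only features.parquet and model.pkl; the cleaning step is left untouched, because Make compares file timestamps to decide what is stale.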

AI’s Impact on Software Development: Structural Changes Ahead

Unlike speculative technology shifts that promise revolution without fundamental need—remember the predictions about cities reorganizing around personal transportation devices?—the integration of AI into software development addresses a genuine economic imperative. Organizations face mounting pressure to reduce development costs while increasing software quality and user responsiveness. This creates the conditions for meaningful structural change in how teams operate and how systems are architected.

After evaluating several opportunities in the AI space, I’ve observed consistent patterns in how forward-thinking organizations are restructuring their development practices. The changes aren’t superficial—they represent fundamental rethinking of project dynamics, team composition, and architectural approaches.

SwiftMac: A Native macOS Speech Server for Emacspeak

Emacspeak turns Emacs into a complete audio desktop for blind and low-vision developers. It needs a speech server to convert text and audio cues into spoken output. SwiftMac implements this server natively in Swift, using macOS speech synthesis APIs directly.

The server receives commands via stdin, manages speech queues, and outputs audio through macOS AVSpeechSynthesizer. Async from the ground up. Fast. Responsive.
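
A minimal sketch of that command loop, assuming hypothetical command letters (q to queue text, d to dispatch the queue, s to stop); the real server handles a much richer command set:

import AVFoundation

// Hypothetical sketch of the loop described above; command letters and
// queueing behavior are illustrative, not SwiftMac's actual implementation.
let synthesizer = AVSpeechSynthesizer()
var queued: [String] = []

while let line = readLine() {
    let parts = line.split(separator: " ", maxSplits: 1).map(String.init)
    guard let command = parts.first else { continue }
    switch command {
    case "q" where parts.count > 1:
        queued.append(parts[1])                     // queue text without speaking yet
    case "d":
        for text in queued {                        // dispatch everything queued so far
            synthesizer.speak(AVSpeechUtterance(string: text))
        }
        queued.removeAll()
    case "s":
        queued.removeAll()                          // drop the queue and cut off speech
        _ = synthesizer.stopSpeaking(at: .immediate)
    default:
        continue                                    // unrecognized commands are ignored here
    }
}

AVSpeechSynthesizer queues utterances and speaks them asynchronously, so the read loop never waits on audio output.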

The Emacspeak Protocol

Speech servers receive single-letter commands on separate lines:

Technology Sprawl in the Age of AI: Human Review is the Bottleneck

AI can generate a 50,000-line web application with complete frontend, backend, database schema, and deployment configuration in a day. The bottleneck isn’t writing code anymore. It’s human verification. What can your team actually review and confirm is correct?

Technology sprawl - ten programming languages, twenty frameworks, five databases - compounds this bottleneck. AI generates code in all of them. Your team can effectively review code in maybe two or three.