Resume
Objective
Experienced technologist seeking a senior+ software engineering position.
Technologies
Programming Languages: Go (Golang), Rust, Perl, PHP, Python, JavaScript
Operating Systems: Linux (Fedora, RHEL, CentOS, Ubuntu, Debian), macOS
Configuration Management/Infrastructure as Code: Puppet, Terraform, Salt, Ansible
Data Stores: MySQL, Postgres, Redis, Elasticsearch, Memcached, ScyllaDB
Virtualization: KVM, VMware
Messaging: Kafka, Redpanda, RabbitMQ, NSQ, AWS SQS, AWS SNS, Google Pub/Sub
Cloud: AWS, Azure, GCP, DigitalOcean, OpenStack, CloudStack
Containers/Container Orchestration: Docker, Nomad, ECS, Kubernetes (k8s)
Monitoring/Visualization: Nagios, Zabbix, Cacti, Graphite, Kibana, Grafana, Prometheus
Shell: Bash
Secrets Management: Vault, AWS KMS
Text Editors: Vim, VS Code
Source Control: Git, SVN, CVS
Work History
2020-04 - Present: Systems Architect, Netskope
- Built a service running at our edge for ingesting high-volume RUM metrics.
- Built a service for managing the failover of Keepalived-based VIPs from machine to machine regardless of the underlying network implementation (OpenStack, AWS, GCP, AliCloud).
- Worked on the infrastructure supporting a global deployment of HashiCorp’s Vault Enterprise hosted in GCP. Improved the processes for managing Vault configuration (policies, roles, mounts, etc.) by migrating them into Terraform, and refactored infrastructure deployment to support in-place rolling upgrades of Vault and Consul clusters.
- Served as a Vault SME, helping teams onboard new services to Vault. Worked closely with the monitoring team to ensure Vault services and backups were correctly monitored.
- Built tooling for testing Vault policies and interactions between multiple policies, ensuring policies worked as intended and enabling rapid iteration during policy development.
- Built tooling to deploy SSL certificates from Vault to all edge services that needed them in under 10 minutes. Hardened the certificate delivery mechanism to meet FedRAMP requirements.
- Automated the deployment and management of bare-metal Kubernetes clusters in a GitOps-style workflow leveraging GitHub Actions.
- Built tooling for improving the developer experience when working with our fleet of k8s clusters.
2016-05 - 2020-04: Principal Engineer, Graymeta, Inc.
- Led a team that optimized the critical path of our data ingestion pipeline, yielding a 125x performance improvement.
- Managed the migration to Go modules. Stood up an internal module proxy to integrate with our private modules.
- Developed internal standardized tooling and libraries for concerns such as structured logging and metrics collection.
- Deployed AWS Config for compliance reporting. Integrated with monitoring systems for alerts.
- Built a SaaS version of our core platform integrated with Kubernetes. Integrated Stripe as the payment processor.
- Designed and implemented the backend system for ingesting the 2018 Royal Wedding video stream, processing the feed through human-moderated facial recognition (powered by AWS Rekognition) before distribution via CloudFront to a live worldwide audience of 1M+ users on a 2-minute delayed feed. This was known as the Sky News Who’s Who app. Flew to London to provide on-the-ground support on the day of the event.
- Authored Terraform modules for infrastructure management, and published a module to the Terraform Registry for enterprise customers to deploy our product inside their AWS environments.
- Built a new CI pipeline, automating testing and deployment with Docker, Jenkins, and Puppet.
- Managed deployments across various IaaS providers: AWS, Azure, SoftLayer, and Oracle Cloud.
- Built CloudFormation templates that included RDS, Elasticsearch, Application Load Balancers, Auto Scaling groups, CloudWatch Logs, ECS clusters, EFS filesystems, etc. to automate deployment of our application via the AWS Marketplace.
- Designed, refactored, and debugged our product’s Go codebase to add features, operationalize existing code, and improve overall code quality.
- Refactored our backend batch processing system so it could run inside containers under multiple container orchestrators on multi-tenant container clusters, leading to an overall reduction in infrastructure spend.
2015-11 - 2016-05: Senior Systems Administrator, Raytheon at NASA Jet Propulsion Laboratory
- Linux systems administration in a 1400+ node, heterogeneous environment.
2013-10 - 2015-11: Software Engineer, Limelight Networks
- Refactored internal virtualization management control plane.
- Modernized and maintained an asset management application (a forked and heavily customized version of RackMonkey). Built a test suite, then used it to refactor the backend code while ensuring backward compatibility.
- Helped monitor a large-scale network with thousands of nodes spread across 50+ data centers around the world. Completed the migration off a 2001-era monitoring system.
- Maintained and refactored legacy code and systems.
- Developed new self-service portals for infrastructure management.
- 100% remote employee.
2012-04 - 2013-09: Systems Administrator, Blackstar Marketing, LLC.
- Deployed Puppet, Puppet Dashboard, and PuppetDB for configuration management to existing infrastructure.
- Deployed MCollective for orchestration. Wrote custom MCollective plugins to address our specific needs.
- Deployed LDAP for centralized authentication.
- Deployed centralized logging via rsyslog and Logstash, with Kibana as the web frontend.
- Abstracted the existing domain purchasing code, allowing us to purchase domains from any registrar that exposed a domain purchasing API.
- Redesigned and redeployed the Nagios implementation. Nodes automatically added/removed themselves via Puppet.
- Deployed a Jenkins environment for continuous integration/builds. All builds output RPM packages as artifacts.
- Deployed Graphite + Tasseo + Bucky + collectd for real-time metrics collection and dashboards.
2011-06 - 2012-04: Sr. Systems Administrator, The Walt Disney Studios
- Deployed Puppet for configuration management to existing infrastructure.
- Helped to design and deploy a CloudStack and VMWare based private cloud.
- Designed and built a cloud-based, auto-scaling video transcoding application wrapped around FFmpeg to demonstrate how to leverage the capabilities of our private cloud.
2010-09 - 2011-06: Enterprise Systems Engineer, Specific Media, Inc.
- All aspects of 24/7 live site operations.
- Managed Puppet infrastructure through major version upgrades. Refactored Puppet code to leverage new features.
- Tied together disparate pieces to build a fully automated bare-metal-to-production server build system, using Request Tracker (RT) with the AssetTracker plugin as the system of record. DNS, DHCP, Kickstart, and Puppet were all tied into the system.
- Refactored and automated internal DNS infrastructure.
2009-11 - 2010-08: Operations Engineer, Hangout Industries, Inc.
- All aspects of 24/7 live site operations.
- Deployed Puppet for automated management of machine configurations.
- Refactored existing infrastructure and all application configurations; added additional hardware to a mixed Linux/Windows environment.
- Configured and tested in-house .NET applications on a Linux/Apache/mod_mono stack, allowing us to migrate existing applications from Windows servers to Linux servers; this reduced costs and increased stability.
- Load tested infrastructure using a combination of off-the-shelf solutions (Apache Bench, JMeter, MySQL bench, etc.) and homegrown applications. Compared different hardware, OS, and application configurations against anticipated demand requirements. Helped developers analyze slow query logs and refactor the existing database structure for improved performance.
- Built a data warehousing system to load raw application log files into a MySQL database for in-depth analysis by our analytics team. Developed a suite of business intelligence reports that ran nightly to feed information to management.
2006-12 - 2009-11: Software Build Engineer/Associate Site Operations Engineer, Disney Interactive Media Group
- Primary operations team contact for Pirates of the Caribbean Online (POTCO), an MMORPG.
- Build and release manager for all POTCO game services. The game client was built for the Win32 and OS X platforms; the game server was built on RHEL.
- Retooled the POTCO build process to make it easier to understand and manipulate, and to simplify training new publishers. This new process was then back-ported to Toontown, another MMORPG developed by our studio.
- Migrated the build process for POTCO into a continuous integration server.
- Authored the Windows client installers for POTCO and Toontown.
- Day-to-day operations of the game infrastructure, including server configuration, maintenance, monitoring, and backup of application log files and databases.
- Developed custom SiteScope and Cacti monitoring utilities.
- Developed a standardized suite of log collection and archival tools designed to be used cross-product, cross-deployment.
- Dove into hundreds of gigabytes of log files to study in-game player behavior and provide feedback and data to developers. Condensed the log files into a format that was easier to manage and provided it to the developers for their own analysis.
- Developed automated daily reports based on parsing application log files and querying a bug reporting database to indicate overall product stability. This information helped the product developers to prioritize daily activities.
- Developed tools to help our technical services/customer support teams mine data when issues arose with individual customers.
2004-12 - 2006-11: Sort Product Development Engineer, Flash Products Group, Intel
- Developed C code to test the functionality of flash memory chips during wafer level sort.
- Employed high-volume data analysis and interpretation to tune defect screens and monitor process variations, ensuring that quality devices were delivered to the customer.
- Built and maintained data analysis tools, specifically SSAT (Sort Spatial Analysis Tool), which automated much of the day-to-day manual data extraction process and provided spatial analysis of data for wafer-level trends and defects.
- Developed a set of Perl modules to do spatial analysis within an individual flash memory device based on the block architecture of the specific product. Helped to integrate these modules into a larger data analysis tool (TableTool).
- Repurposed an aging server to run large batch data analysis jobs. Built a web frontend for job submission and tracking.
Certifications
- 2017 AWS Certified Solutions Architect - Associate
- 2013 Puppet Certified Professional
Education
- 2000-2004: University of California, Davis, Bachelor of Science, Computer Engineering
Public Speaking
- AWS: This is My Architecture - Real Time Celebrity Identification at the Royal Wedding Using Amazon Rekognition Video
- HashiConf 2017 - “Backend Batch Processing With Nomad” Slides / Video
- PuppetCamp Los Angeles 2012 - “Integrating CloudStack with Puppet” Slides / Video
- CloudStack Collaboration Conference 2012 - “Running Puppet on CloudStack Instances” Slides / Video