Extensive application & infrastructure experience in the IT industry across the full software lifecycle, most significantly in integration, development and deployment in international environments, and most recently DevOps CI/CD in agile UK government.
A cross-skilled, hands-on technical all-rounder, working seamlessly across multi-disciplined, geographically dispersed customer, internal and supplier teams, quickly establishing the confidence and trust needed to implement complex IT solutions.
Keen to learn new skills and technologies, such as NodeJS & MongoDB, as demonstrated by recently attaining AWS Certified Developer Associate, while still blending in the traditional tried and tested.
Results focussed and agile, with a proven track record of delivering high quality solutions that bring real business & user benefit, using strong analytical & creative reasoning to overcome problems as they arise, all the while embracing change for continuous improvement.
More than just a vocation, promotes IT enablement at work and at home. Computing is both a profession and a hobby: keeping abreast of the latest technology and supporting talent progression at work, and more personally as a STEMnet Ambassador & Code Club mentor.
23rd October 2019
Extended private (bare metal) Kubernetes cluster, including:
Contracted to News UK to design, build and maintain a microservice (Node.js) Audio Platform, hosted primarily on Kubernetes (private EC2 clusters with VPC peering) with complementary AWS services, viz. S3 with CloudFront (CDN), Certificates/Route53 and DocumentDB. Build and deployment is primarily CircleCI, with some legacy Jenkins.
This is an agile (fortnightly scrum) project, with all the usual ceremonies including daily stand-ups, weekly refinements, planning, showcases and retrospectives.
Primarily a Linux/Mac development environment, I introduced Windows for development. The first challenge was to update the project documentation and, using Docker Desktop for Windows with Kubernetes enabled, get the microservices running on Windows.
Introduced AWS tags on all resources across all environments, to allow reporting on resources within a shared AWS account. This was simply a case of updating the Terraform configuration across multiple Terraform projects in a single repo in a consistent and easily maintainable manner. This introduced me to CircleCI and the gated generate, inspect and deploy workflow for Terraform plans across dev, staging, UAT and into production.
Following on from AWS tagging, a quick spike to work out how to "tag" (label) the Kubernetes cluster. Updated the kustomize configuration (bases and environments) to apply labels expressed using the Kubernetes recommended label structure. Used kubectl describe to check the local deployment, and updated the Jest static tests on the generated target environment manifests (dev, UAT and prod).
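For illustration, a minimal sketch of the kind of Jest static test described above, assuming the kustomize output for an environment has been rendered to a local YAML file (the file path and label set are illustrative):

```js
// Jest static test over a rendered kustomize manifest (hypothetical path),
// asserting the Kubernetes recommended labels are present on every resource.
const fs = require('fs');
const yaml = require('js-yaml');

describe('generated dev manifests', () => {
  const docs = yaml
    .loadAll(fs.readFileSync('generated/dev/manifests.yaml', 'utf8'))
    .filter(Boolean);

  test('every resource carries the recommended labels', () => {
    docs.forEach((doc) => {
      const labels = (doc.metadata && doc.metadata.labels) || {};
      expect(labels['app.kubernetes.io/name']).toBeDefined();
      expect(labels['app.kubernetes.io/part-of']).toBeDefined();
    });
  });
});
```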
Updated the shared Kong hosting environments to use the jwt plugin to provide route-based Auth0 JWT authentication in front of the Graph API microservice. Demonstrated operation locally by running Kong (with PostgreSQL) via docker-compose, deployed using in-house (custom) Python scripts, with all microservices running on Kubernetes against Auth0. Demonstrated both authentication (Auth0 Server Resources/APIs and Application-requested "access tokens") and authorisation (Auth0 scopes) within the Graph API microservice. Updated the in-house Java Cucumber integration test scripts to request an access token and exercise various combinations of no token, expired token and invalid token. Deployed the Kong changes using Jenkins across dev, SI, UAT, staging and production.
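As a sketch only (route and scope names are illustrative, and the actual token validation is performed upstream by Kong's jwt plugin), the in-service authorisation check amounts to Express middleware along these lines:

```js
// Express middleware sketch: Kong has already authenticated the Auth0 access
// token upstream; here we only enforce authorisation via the token's scopes.
const requireScope = (required) => (req, res, next) => {
  const token = (req.headers.authorization || '').replace(/^Bearer /, '');
  const [, payload = ''] = token.split('.');
  let scopes = [];
  try {
    const claims = JSON.parse(Buffer.from(payload, 'base64').toString('utf8'));
    scopes = (claims.scope || '').split(' ');
  } catch (err) {
    return res.status(401).json({ error: 'Invalid access token' });
  }
  if (!scopes.includes(required)) {
    return res.status(403).json({ error: 'Insufficient scope' });
  }
  return next();
};

// Illustrative usage: app.use('/graphql', requireScope('read:audio'), graphqlHandler);
```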
Whilst in between contracts, rebuilt the office KVM/vagrant/ansible "enterprise" network, including:
Contracted to Sopra Steria to design & build the backend API for an Angular frontend application.
An agile scrum project; fortnightly scrum cycles.
The API is built on Node.js using the Express v4.x framework and Sequelize v4.x, with data stored in a PostgreSQL v11 database. Both the application servers (containers) and the database (AWS RDS) are hosted on UK Gov PaaS.
Full ownership: deriving the required APIs from the application requirements (stories & epics), defining and maintaining the API namespace and methods, implementing the endpoints (code and test), documenting the endpoints, supporting the frontend developers in their use of the API, and supporting DevOps on CI/CD of the API through dev, test, preprod and production.
From the given alpha (demo) code, immediately introduced transaction handling for multiple writes, API HTTP 40x/50x error responses and an endpoint integration test suite using jest, supertest and faker. Introduced a local proxy allowing the frontend developers to work autonomously, consuming the API already deployed into the controlled dev environment.
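The transaction handling follows the standard Sequelize managed-transaction pattern; a minimal sketch with hypothetical model names:

```js
// Sequelize managed transaction (model names are illustrative): the multiple
// writes either all commit together or all roll back on error.
const createParentWithChildren = (sequelize, { Parent, Child }, payload) =>
  sequelize.transaction(async (t) => {
    const parent = await Parent.create(payload.parent, { transaction: t });
    await Child.bulkCreate(
      payload.children.map((c) => ({ ...c, parentId: parent.id })),
      { transaction: t },
    );
    return parent;
  });
```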
When it came time for end-user authentication, introduced JWT for stateless authorisation on each API endpoint, implementing the endpoint authentication and authorisation framework to generate/renew the JWT and validate the JWT using middleware and a well-crafted API namespace, and supporting a colleague with their implementation of the bcrypt (password hashing) authentication logic.
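A minimal sketch of the JWT validation middleware idea, assuming the jsonwebtoken package and a shared secret (claim handling and error shapes are illustrative):

```js
// Express middleware validating the stateless JWT on protected endpoints.
const jwt = require('jsonwebtoken');

const authenticate = (req, res, next) => {
  const token = (req.headers.authorization || '').replace(/^Bearer /, '');
  try {
    req.claims = jwt.verify(token, process.env.JWT_SECRET); // throws if invalid or expired
    return next();
  } catch (err) {
    return res.status(401).json({ error: 'Not authenticated' });
  }
};

// Illustrative usage: router.get('/records', authenticate, listRecords);
```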
With that in place, together with the set of new API endpoints for updating and retrieving rich form data, worked with the project DevOps and DBA to set up and deploy the beta application release into new Gov PaaS preprod and production hosting environments. Introduced convict for schema-based, rich, environment-specific configuration.
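A minimal convict sketch (property names are illustrative) showing the schema-based, environment-aware configuration style:

```js
// convict schema: typed configuration with defaults and ENV overrides,
// validated strictly at startup.
const convict = require('convict');

const config = convict({
  env: {
    format: ['dev', 'test', 'preprod', 'production'],
    default: 'dev',
    env: 'NODE_ENV',
  },
  db: {
    host: { format: String, default: 'localhost', env: 'DB_HOST' },
    port: { format: 'port', default: 5432, env: 'DB_PORT' },
  },
});

config.validate({ allowed: 'strict' });
module.exports = config;
```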
Following the beta release, introduced a full audit framework across the backend API using entities and managed properties (prototype and factory patterns from the Gang of Four), with audit data stored in the database at both entity and property level.
Extended the environment-specific configuration to integrate with AWS Secrets Manager, allowing sensitive information to be managed centrally for all environments (as opposed to command-line ENV variables).
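A minimal sketch of the Secrets Manager integration (the secret name and region are illustrative), loading secrets at startup and merging them into the configuration:

```js
// Fetch a JSON secret from AWS Secrets Manager instead of passing sensitive
// values on the command line.
const AWS = require('aws-sdk');

const loadSecrets = async (secretId) => {
  const client = new AWS.SecretsManager({ region: process.env.AWS_REGION || 'eu-west-2' });
  const { SecretString } = await client.getSecretValue({ SecretId: secretId }).promise();
  return JSON.parse(SecretString);
};

// Illustrative usage: const secrets = await loadSecrets(`api/${config.get('env')}`);
```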
With sensitive data now managed centrally, and using experience gained on a previous contract through Semantic Integration, introduced serverless Daily Snapshot reporting for the client's analytics team in Leeds, using the Serverless Framework and AWS Lambda, with Secrets Manager holding the credentials used to generate the JWT for accessing the reports API. The generated reports are stored (& managed) securely on AWS S3, with signed links distributed by email (AWS SES) and Slack (webhooks). The report was scheduled using AWS CloudWatch scheduled events to run the Lambda function. These daily snapshot reports, although not tied to the backend deployment, were deployed into each of dev, staging, preprod and production, making it possible to demonstrate and gain client approval of the reports through the traditional deployment cycles.
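In outline (bucket, key and webhook values are illustrative), the Lambda stores the snapshot on S3, creates a time-limited signed link and shares it:

```js
// Store the generated snapshot on S3, then distribute a time-limited signed
// link via a Slack incoming webhook (email via SES follows the same pattern).
const AWS = require('aws-sdk');
const axios = require('axios');

const shareSnapshot = async (reportBody) => {
  const s3 = new AWS.S3();
  const params = { Bucket: 'daily-snapshots', Key: `snapshot-${Date.now()}.csv` };

  await s3.putObject({ ...params, Body: reportBody, ServerSideEncryption: 'AES256' }).promise();
  const url = s3.getSignedUrl('getObject', { ...params, Expires: 60 * 60 * 24 }); // 24 hours

  await axios.post(process.env.SLACK_WEBHOOK_URL, { text: `Daily snapshot ready: ${url}` });
};
```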
Continued to enrich the backend API with new endpoints, and to extend the automation test framework, following story refinement as features were developed in the frontend. This included taking full responsibility for the database schema: patching the dev database and providing quality DB patch scripts for the DBA to run in the staging, preprod and production environments.
Although not within my responsibilities, took ownership of the beta Data Migration (just 42 users), working through the incumbent Oracle database schema (with no access to the incumbent development team), documenting the mapping and writing pgsql (Postgres functions) to quickly and repeatably migrate the data. Stood up a migration application server and worked closely with the client's service support team to achieve the necessary review, approval and sign-off of the users' migrated data, critical for securing client sign-off of the first true application release. This formed the start of a close working relationship with the client's service team, as a demonstration of the ability to deliver good quality solutions quickly. Part of the migration solution included updating the backend authentication endpoint to support the incumbent's hashed login method, allowing users to seamlessly reuse their old application credentials.
Took ownership of the Bulk Upload capability via new API endpoints, which included uploading the files from the frontend application to AWS S3 via S3 signed PUT URLs, mapping reference data from "bulk upload externalised references" to the application's internal references (online transformation), extensive validation logic, and reuse of the backend API entity and managed property framework to ensure full auditing. Delivering this complex and crucial functionality was only possible by leveraging the direct relationship with the client's service team in Leeds, established during the beta data migration. This involved two visits to Leeds to work onsite with the service team manager to secure client acceptance.
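The signed PUT URL part of this is small; a sketch with an illustrative bucket name and key layout:

```js
// The API issues a short-lived signed PUT URL so the frontend uploads the file
// directly to S3 rather than streaming it through the backend.
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

const signedUploadUrl = (ownerId, filename) =>
  s3.getSignedUrl('putObject', {
    Bucket: 'bulk-upload',            // illustrative bucket name
    Key: `${ownerId}/${filename}`,    // illustrative key layout
    Expires: 300,                     // 5 minutes
    ContentType: 'text/csv',
  });
```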
Following the initial beta data migration, full migration of 22,000 users and their data from the incumbent dataset, including a performance improvement to migrate data concurrently, reducing the migration time from 12 hours to 4.5 hours.
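The concurrency gain came from running per-user migrations in parallel batches rather than strictly one after another; a sketch of the pattern only (the batch size is illustrative, and the migration itself remained the Postgres functions described above):

```js
// Run per-user migrations in bounded parallel batches instead of sequentially.
const migrateAll = async (userIds, migrateUser, batchSize = 10) => {
  for (let i = 0; i < userIds.length; i += batchSize) {
    const batch = userIds.slice(i, i + batchSize);
    await Promise.all(batch.map((id) => migrateUser(id)));
  }
};
```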
Identified a gap in the support of the application, in that reference data is locked away in the database, requiring multiple project resources to update it. From my time at Photobox, identified the need for a CMS to maintain such reference data. Also identified that some administrative stories in the backlog were not best served through the frontend application; a more collaborative approach could be made available.
Shortlisted four CMSs: strapi, KeystoneJS, Nodebeats and Apostrophe. Reviewed these CMSs on their ease of installation, customisation, security and integration with the backend API. Secured selection of strapi because of its rich API capability (let down only by its Administrative Console user security). Built the AWS EC2 server instances (one for dev and one for test) and the AWS Hosted Zone from a registered domain using Terraform automation. Manually installed Node.js and strapi, but utilised a remote MongoDB Atlas database provision. Security was key; achieved pen test acceptance first time.
Used AWS Kinesis, with AWS IAM roles/policies for each of dev, test, preprod and production, to pump data from the backend API on every create, update and delete and ingest it into a MongoDB Atlas database instance, merging disparate entities into single documents within MongoDB collections, affording the power of MongoDB's rich data queries and aggregation pipeline to quickly extract data.
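A minimal sketch of the ingest side (the record shape, database and collection names are illustrative): a Lambda consumes the Kinesis records and upserts merged documents into Atlas:

```js
// Kinesis -> MongoDB Atlas ingest: decode each record and upsert the entity's
// properties into a single merged document.
const { MongoClient } = require('mongodb');

exports.handler = async (event) => {
  const client = await MongoClient.connect(process.env.MONGO_URI);
  try {
    const collection = client.db('reporting').collection('entities');
    for (const record of event.Records) {
      const change = JSON.parse(Buffer.from(record.kinesis.data, 'base64').toString('utf8'));
      await collection.updateOne(
        { _id: change.entityId },
        { $set: change.properties },
        { upsert: true },
      );
    }
  } finally {
    await client.close();
  }
};
```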
Used pm2 to install and manage as services both the strapi application and a complementary custom Slack App API, handling the interaction between Slack /slash commands and dialogs and strapi.
Modified the reference data API endpoints to allow the "PUT" method to securely update the reference data records from changes made to the data in the CMS (strapi).
Introduced AWS SNS to notify all new registrations, with separate topics for dev, test, preprod and production, which invoked an AWS Lambda function (built and deployed using the Serverless Framework) to look up data from the MongoDB store, enrich the registration data and then post to Slack (webhook) with buttons to approve/reject, which in turn securely invoked the relevant backend API to approve or reject. All properly secured using AWS IAM roles and policies.
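In outline (field names and button payloads are illustrative, and the MongoDB enrichment step is omitted), the SNS-triggered Lambda looks like:

```js
// SNS -> Lambda -> Slack: enrich the registration and post an interactive
// message with approve/reject buttons.
const axios = require('axios');

exports.handler = async (event) => {
  for (const { Sns } of event.Records) {
    const registration = JSON.parse(Sns.Message);
    await axios.post(process.env.SLACK_WEBHOOK_URL, {
      text: `New registration: ${registration.name}`,
      attachments: [
        {
          callback_id: 'registration_approval',
          actions: [
            { type: 'button', name: 'approve', text: 'Approve', value: registration.id },
            { type: 'button', name: 'reject', text: 'Reject', value: registration.id },
          ],
        },
      ],
    });
  }
};
```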
As part of this demo, also included Slack /slash commands making it easy to search data in the MongoDB database; secured using the Slack command signature and Slack signing secret (stored securely in AWS Secrets Manager).
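Slack's documented request-signing scheme drives the security here; a minimal verification sketch (how the raw request body is captured is left out):

```js
// Verify a Slack slash command: recompute the v0 HMAC-SHA256 signature over
// "v0:<timestamp>:<raw body>" with the signing secret and compare it with the
// x-slack-signature header.
const crypto = require('crypto');

const isValidSlackRequest = (req, signingSecret, rawBody) => {
  const timestamp = req.headers['x-slack-request-timestamp'];
  const expected = `v0=${crypto
    .createHmac('sha256', signingSecret)
    .update(`v0:${timestamp}:${rawBody}`)
    .digest('hex')}`;
  return expected === req.headers['x-slack-signature'];
};
```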
Having recently introduced automation switches to my house, turning lights on/off required interacting with an Android app, which meant having a phone to hand early in the morning and late at night. I bought an Amazon Echo to allow voice activation of the lights.
But having an Alexa, I then wanted to create my own Alexa Skill. At first, I struggled with the concept of invocation names/utterances, and the eclectic approach the Alexa Developer Console uses to add validation and dialogs on slots. But after a week of try this, try that, I finally got to understand the subtleties of invocations, intents, utterances, slots, slot types, dialogs and validations. I have submitted my skill for certification, but it is currently held back by my choice of "invocation name", for which I have organised a competition with my friends & family.
The backend of the skill is of course AWS Lambda; Node.js. Created the code framework using the Serverless Framework, and used Terraform to maintain the necessary IAM role and policies for that Lambda. All code runs both locally and remotely within the Lambda. All local code is unit tested with 100% code coverage.
The Lambda uses Axios to interact with TfL's public API to get a list of the next bus arrivals for a given 'Stop Point'. It uses AWS Secrets Manager to hold my TfL API key details.
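For illustration (the StopPoint id and the spoken phrasing are examples), the TfL call is a straightforward Axios GET against the StopPoint Arrivals endpoint:

```js
// Fetch the next bus arrivals for a StopPoint from TfL's public API and turn
// them into speakable sentences, soonest first.
const axios = require('axios');

const nextArrivals = async (stopPointId, appKey) => {
  const { data } = await axios.get(
    `https://api.tfl.gov.uk/StopPoint/${stopPointId}/Arrivals`,
    { params: { app_key: appKey } },
  );
  return data
    .sort((a, b) => a.timeToStation - b.timeToStation)
    .map((a) => `${a.lineName} to ${a.destinationName} in ${Math.round(a.timeToStation / 60)} minutes`);
};
```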
The Lambda posts notifications to a Slack channel using rich formatting, including details of the incoming event (on error/unexpected intent/missing destination). There are multiple levels of notification (none, error only, through to trace level), controlled with a Lambda env variable, making it easy to adjust the level of notification once deployed.
In preparation for a new role, I set out learning Terraform. Whilst already familiar with vagrant/ansible for the office server, I set about using Terraform to provision/teardown one of multiple VPCs (based on a given environment of dev, test, acceptance and production), with the VPC/subnets chosen from a lookup of predefined definitions.
The VPC includes public and private subnets across one or more Availability Zones.
The VPC includes a bastion virtual server (Amazon AMI) deployed into each public subnet, along with the Security Group necessary to allow remote SSH (using a nominated key-pair) access to it, and from it (the bastion) remote SSH access to the other public and private subnet guests. Whilst provisioning the bastion guest, used Terraform to create a new policy and IAM role (with assume role) to run against the instance.
Following the recent (careless) loss of a KVM guest, have rebuilt the server (2*8 core Xeon with 78GB of RAM) to be fully provisioned: PXE boot (Raspberry Pi) of Ubuntu Server with a post-install script to then configure KVM/libvirt. TODO: replace the post-install script with 'cloud-init' (native Ubuntu provisioning tech, also supported by AWS EC2).
A semi-auto provisioned vagrant/libvirt/ansible "Fedora server" guest (manual creation of the guest but the guest then provisions itself via ansible). This guest is then able to provision all other guests.
A manually provisioned firewall gateway guest (Untangle), presented to the home network and to each of the KVM host-only networks, with ingress control and gateway port forwarding.
From a CentOS 7 vagrant box, a collection of reusable common Ansible tasks to provision a base WOZiTech CentOS-specific server instance, which includes default packages (present/absent - lockdown), network reassignment (through the Untangle firewall), firewalld services reassignment and lockdown, and an optional set of docker prerequisites. Experienced the pain of ansible::yum::latest; an aspect of the way "yum check" works makes using latest extremely slow. TODO: turn these common tasks into a reusable role, including storage provisioning via LVM, and link up to a HashiCorp Vault (to store SSH public/private keys for the default set of users); need to provision the Vault guest.
A wiki.js guest (CentOS 7), using ansible to install dependencies (git2, node.js and MongoDB), manage directories and users (non-system privilege), and install the wiki.js application, a custom config file and a systemd service to manage the wiki.js lifecycle using an ansible template. Idempotent. TODO: back up and restore users to HashiCorp Vault when reprovisioning, to allow full recovery of the wiki.js provisioning; this will include installing a MongoDB Change Stream event on the users collection.
A proxy guest (CentOS 7), serving as a reverse proxy, using ansible to automate the docker installation, with two docker instances: one nginx instance with custom templates to define the default and wozitech.asuscomm.com (DDNS) reverse proxy to wiki.js, and a second Let's Encrypt instance to provide the SSL certificate for the wozitech.asuscomm.com domain. systemd services manage each of the containers on start up. Idempotent. Initially tried using jwilder's nginx-proxy docker image, but swapped to the native nginx docker image after realising jwilder's proxy is for reverse proxying other docker instances running on the same host, whereas I needed a reverse proxy to another host. TODO: introduce a forward proxy docker container (squid).
A Sonatype Nexus3 Repository Manager guest, to serve as a local repo for all custom Docker images and custom Helm (Kubernetes) projects, along with a cache of npm (node.js) and yum (CentOS) packages. Used the ansible role ansible-thoteam.nexus3-oss. Overcame a limitation with the role, which was failing to identify the latest version, by reviewing and understanding the code (simply had to set "nexus_version" to what I had determined was the current latest version).
26th June - advanced analytics using aggregation deep dive
27th June - predominantly the advanced development workstream, including Stitch (MongoDB's equivalent of Lambda and DynamoDB Streams)
I volunteer every Monday during term time at St Joseph's RC Infant and Junior school in Norwood, as a Code Club Ambassador.
I saw Code Club advertised in a Linux magazine I was reading on the train whilst returning from a work trip in Wales. I thought: a great opportunity to support our next generation of programmers and to inspire others to embrace their passion.
STEM, Code Club, Volunteering
Application development is my passion. This started out as a simple dashboard for my wedding, to display photos and messages of congratulations. I wanted to rewrite my previous dashboard (CGI) using more lightweight components (Handlebars replaced with JSViews, and AngularJS replaced with custom lightweight jQuery). It now has a NodeJS backend and MongoDB on EC2.
IMAP/SMTP integration quickly followed. Then SMS integration using web sockets push.
Then AWS S3 buckets for storing pictures and videos. Then an S3 website to share photos and videos, with JWT authentication and SSL.
Raspberry Pi, JWT, jQuery, JSViews, NodeJS, Web Sockets, Express, SMTP, IMAP, SMS, MongoDB, AWS EC2, AWS Route 53, AWS S3, AWS S3 website with CORS, AWS IAM policy
Back in 2000, my first opportunity with Shell was to lead the development of their first Business to Business (B2B) web application.
Technology aside, at the time the web was delivering static content, not business-critical services. Shell had great foresight and was an exciting place to be working.
And to top this off, the project used a "follow the sun" development capability and I got to spend six months in Milan, Italy. What more can you ask for?
B2B, Web, Italy, Follow The Sun