Extensive application & infrastructure experience in the IT industry across the full software lifecycle, most significantly in integration, development and deployment in international environments, and most recently DevOps CI/CD in agile UK government.
A cross-skilled, hands-on technical all-rounder, working seamlessly across multi-disciplined, geographically dispersed customer, internal and supplier teams, quickly establishing confidence and trust when implementing complex IT solutions.
Keen to learn new skills and technologies, such as Node.js & MongoDB, as demonstrated by recently attaining AWS Certified Developer Associate, yet blending with the traditional tried and tested.
Results-focussed with agility, with a proven track record of delivering high-quality solutions that focus on real business & user benefit, using strong analytical & creative reasoning to overcome problems as they arise, all the while embracing change for continuous improvement.
More than just a vocation, promotes IT enablement at work and at home. Computing is both a profession and a hobby: keeping abreast of the latest technology and supporting talent progression.
29th September 2021
Contracted to Digital Detox (digital agency), as senior developer, to work with their newest client on a legacy CMS migration to a headless CMS, including modernisation (automation) of manual workflows.
The headless CMS vendor (GraphCMS) had already been chosen by the agency's client. Initial discovery with the agency's client to identify current and future processes and capabilities. Following a demo of the existing legacy (in-house, undocumented) CMS, and with little more than a MySQL database export and the client's target hosting environment (AWS EKS), led the definition and documentation of the Technical Solution, working directly with the GraphCMS product team to align with product capability.
Having secured technical solution approval from the client's technical director, including the creation of a decision log, led the agile scrum/kanban development of the solution, including crafting all backlog tickets, leading refinement, maintaining the RAIDD register, sprint planning and daily stand-ups.
Technical lead (to just one other developer) implementing the migration solution. Sole responsibility for building and maintaining the Migration Staging Service and the export of migration data from MySQL to JSON using Node.js/TypeScript, including GraphCMS schema design and versionable maintenance.
Prime responsibility for loading from JSON (streams for large JSON files) using Node.js/TypeScript to GraphCMS via GraphQL mutations, further complicated by the source database being undocumented and much of the known capability hidden in a Java web application no longer supported. Owned all issues that occurred during development, and always took the initiative to work through issues with the client team to resolution.
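The bulk-loading side of this can be sketched as a bounded-concurrency worker pool, of the kind needed when a target API throttles parallel writes. All names here are illustrative assumptions, not the actual project code; `worker` stands in for the real GraphQL mutation call.

```typescript
// Minimal sketch of bounded-concurrency loading. `worker` stands in for
// the real GraphQL mutation call against the headless CMS.
async function loadWithConcurrency<T>(
  items: T[],
  limit: number,
  worker: (item: T) => Promise<void>
): Promise<void> {
  let next = 0;
  const run = async (): Promise<void> => {
    // Each "worker" pulls the next unclaimed item until the queue is drained.
    while (next < items.length) {
      const item = items[next++]; // safe: JS is single-threaded between awaits
      await worker(item);
    }
  };
  // Start at most `limit` workers draining the shared queue.
  await Promise.all(Array.from({ length: Math.min(limit, items.length) }, run));
}
```

The design keeps at most `limit` mutations in flight at once, which is the usual workaround for API-side concurrency limits.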
Liaised directly with the GraphCMS technical team on challenges faced during loading, including concurrency and association limitations.
Having secured acceptance of the migrated data, led the extraction via GraphCMS GraphQL queries (relay) to an encrypted XML file per on-demand customer. Using Node.js/TypeScript, built both a CLI and a RESTful application, allowing the client to package and deploy internally in many different ways of their choosing.
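The relay-style extraction walk can be sketched as follows. This is a minimal illustration under assumptions: `fetchAllNodes` and the page shape are made up for the sketch, though `pageInfo`/`endCursor` follow the relay connection convention.

```typescript
// Sketch of walking a relay-style connection to exhaustion.
interface Page<T> {
  nodes: T[];
  pageInfo: { hasNextPage: boolean; endCursor: string | null };
}

async function fetchAllNodes<T>(
  fetchPage: (after: string | null) => Promise<Page<T>>
): Promise<T[]> {
  const nodes: T[] = [];
  let cursor: string | null = null;
  for (;;) {
    const page = await fetchPage(cursor);
    nodes.push(...page.nodes);
    if (!page.pageInfo.hasNextPage) return nodes;
    cursor = page.pageInfo.endCursor; // resume after the last node seen
  }
}
```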
Contracted to News UK, first onsite then, owing to the pandemic, fully remote, to design, build and maintain the Audio Platform: a secure GraphQL API for collating and presenting a common set of data for recognisable digital brands including talkSPORT, Times Radio (launch) and Virgin Radio (launch). Development, test, deployment and support, with mentoring (×8) and technical leadership.
A set of microservices (Node.js/TypeScript, with fully automated unit/component tests via Jest) hosted primarily on Kubernetes with complementary AWS services, viz. S3 with CloudFront (CDN), Certificates/Route53 and DocumentDB, fully provisioned via terraform and deployed using CircleCI with some legacy Jenkins. terraform provisioning included maintaining New Relic monitoring traces and alert conditions, and fine-grained IAM roles and policies, across multiple AWS accounts. Local (dev) AWS access via Okta single sign-on with multiple assumed roles.
An agile (fortnightly scrum) project split across multiple locations, including London and Sofia (Bulgaria), with all the usual ceremonies: daily stand-ups, weekly refinements, planning, showcases and retrospectives. With an eye for detail, a key contributor to the backlog. With broad experience and a keenness to collaborate, often praised for the thoroughness of spikes. With a delivery focus, invariably completed sprint goals. With engineering discipline, established good practices for repeated high-quality releases. Regular pair programming sessions, first when I arrived and then as I became the mentor to new joiners. Provided input on permanent staff performance reviews. By knowing the subject and the audience, made showcases fun. Ready to jump on a whiteboard, even while working at home during the 20/21 pandemic.
An "Apple Mac" development environment; introduced Windows & Linux for development. The first challenge with Windows was to update the project documents to cover Docker Desktop on Windows with Kubernetes enabled, and Git for Windows bash.
Introduced AWS tags on all resources across all environments, to allow for the reporting of resources on a shared AWS account. This was simply a case of updating the terraform configuration across multiple terraform projects in a single repo, in a consistent and easily maintainable manner. Followed up with a spike to "tag" (label) Kubernetes resources, updating the kustomize configuration (bases and environments) to apply common labels expressed using the Kubernetes recommended label structure.
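As a rough illustration of the kustomize change (values assumed, not the real project configuration), an environment overlay applying the Kubernetes recommended labels might look like:

```yaml
# kustomization.yaml (environment overlay) - illustrative values only
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
bases:
  - ../../base
commonLabels:
  app.kubernetes.io/name: audio-platform
  app.kubernetes.io/part-of: audio
  app.kubernetes.io/managed-by: kustomize
```

`commonLabels` propagates the labels onto every resource (and its selectors) in the overlay, which is what makes this approach easily maintainable.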
Updated the shared Kong (API Gateway) hosting environments to use the jwt plugin to provide route-based Auth0 JWT authentication against the Graph API microservice. Demonstrated operation locally by running Kong (with a Postgres DB) via docker-compose, deployed using in-house (custom) Python scripts with all microservices via Kubernetes against Auth0. Demonstrated both authentication (Auth0 Server Resource/APIs and Application-requested "access tokens") and authorisation (Auth0 scopes) within the Graph API microservice.
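The authorisation side can be sketched as a scope check against the Auth0 `scope` claim (a space-separated string of granted scopes). This is illustrative only; scope names are assumptions, and token verification itself happened upstream via the Kong jwt plugin.

```typescript
// Sketch: check the Auth0 "scope" claim from a verified access token
// against the scopes a given operation requires.
function hasScopes(scopeClaim: string, required: string[]): boolean {
  const granted = new Set(scopeClaim.split(" ").filter(Boolean));
  return required.every((scope) => granted.has(scope));
}
```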
Updated the in-house Java Cucumber integration test scripts to request an access token, and demonstrated the various combinations of no token, expired token and invalid token.
Deployed Kong changes using Jenkins across dev, si, uat, staging and production.
Forged a trusted relationship with the internal Kong team, especially when it came to introducing cluster rate limiting; subsequently downgraded to local rate limiting.
Implemented GraphQL edge caching with the Authorisation header (Auth0 JWT token) using a News UK shared Akamai service. Worked directly with the Akamai technical consultants, who staged the solution in dev. Took ownership to deliver the solution through to production, working with News UK change services. Extended the Akamai solution to honour the server cache directive, and updated the Graph API microservice (Apollo GraphQL) for schema-declared (and data-driven) cache TTLs. Overrode the default Apollo GraphQL "no-cache" implementation to disable the cache at start-up. Implemented a cache bypass through Akamai and Kong.
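The schema-declared TTL behaviour can be sketched as collapsing per-field cache hints into a single response `Cache-Control` header, in the spirit of Apollo's cacheControl (the overall max-age is the most restrictive field hint). Names and defaults here are assumptions for illustration.

```typescript
// Sketch: derive one Cache-Control header from per-field cache hints.
interface CacheHint {
  maxAge?: number; // seconds
  scope?: "PUBLIC" | "PRIVATE";
}

function cacheControlHeader(hints: CacheHint[], defaultMaxAge = 0): string {
  const ages = hints.map((h) => h.maxAge ?? defaultMaxAge);
  const maxAge = ages.length > 0 ? Math.min(...ages) : defaultMaxAge;
  const scope = hints.some((h) => h.scope === "PRIVATE") ? "private" : "public";
  return maxAge > 0 ? `max-age=${maxAge}, ${scope}` : "no-store";
}
```

An unhinted field falls back to the default max-age, so a single uncacheable field makes the whole response uncacheable, which is why the "no-cache" default had to be addressed explicitly.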
Worked again with the Akamai technical consultants to implement a stale-data workaround. Latterly, supported the Cloud Engineering team in migrating these custom cache rules to AWS CloudFront; the spike resulted in a solution including a practical GraphQL schema normalisation approach.
Not just the Audio Platform: the team's responsibilities included rotating onto an internal middleware team, working with a custom mobile app team (Android and iOS). Still Node.js with Jest-based unit and component testing, but now with integrated Pact contract tests, still via CircleCI in AWS EKS. The challenge here was not the tech but, as an internal tool, the absence of quality documentation and access to knowledge champions; you quickly become the expert. Spearheaded changes to transition from engineer rotation to a single platform team working across both Audio Platform and Middleware, able to adapt to changes in sprint load. This included the definition of roles and activities, knowledge transfer (combined stand-ups and refinements), and mentoring and pairing to build knowledge across the team.
French estate agent startup, revolutionising private sales (www.achetemoifrance.com).
Inherited the development of an i18n (English/French) web application built on Java Spring Boot, Thymeleaf and PostgreSQL with Flyway schema management. With minimal handover, quickly established separate dev and test environments and a product board (Trello) with a bug and feature backlog. Introduced a daily stand-up with the company’s creative director and established a priority-driven set of tasks (agile kanban).
Sourced, recruited and onboarded Java freelancers via Upwork (one in Bulgaria and two in Morocco – French speaking). Operating as scrum master, QA and release manager, responsible for preparing and deploying releases into test, showcases, and production deployment.
The inherited production environment was AWS Elastic Beanstalk EC2 with RDS and S3 (for both private and public assets).
Test environment provisioned on home server using existing vagrant, ansible and KVM combo, presented via nginx proxy with DDNS and LetsEncrypt TLS.
Established a serverless backend (Serverless Framework/Node.js) to complement the web application services, with terraform provisioning of fine-grained AWS resources using IAM, including currency import and task maintenance via CloudWatch Events rules, and professional email templates (English & French variations) using AWS SES with SNS events from the webapp.
Since Dec 2020, as a result of the continued pandemic, extended the backend to provide import and export integration with Apimo (France’s leading estate agent management application), using AWS SNS, Step Functions, Lambda and S3, with secure separation between Internet-facing tasks and database tasks, facilitated by AWS VPC endpoints for S3 and SNS. A combination of batch asynchronous and synchronous processing, owing to limitations in the Apimo RESTful API.
A further extension to the backend to provide a public listings API (both JSON and XML) using AWS APIGW via the existing Serverless Framework.
UK estate agent start-up findit360.uk. Note: the website development was transferred to another provider; the current live site is not the original. View the original here.
Agile development (kanban); responsible for identifying, refining, estimating and prioritising backlog, release preparation and deployment, showcasing, and invoicing.
Mobile-first ReactJS web application with Redux. Led two ReactJS UK freelancers (mentoring, pair programming and code review), testing and merging their code. The ReactJS app and data are served via AWS S3, accelerated using AWS CloudFront, secured by AWS Certificates, with registration/RBAC login via AWS Cognito including custom properties. Integrated Facebook and Google Analytics. Cross-browser/platform testing using BrowserStack.
API provided by AWS APIGW, secured via AWS Cognito JWT verification. Developed and deployed using the Serverless Framework with Lambda (Node.js), consuming AWS SES for email notifications.
Separate dev, test, acceptance and production environments (AWS Route53), provisioned by terraform and locked down via fine grained AWS IAM roles/policies.
Integrated the Ionic framework with Capacitor to provision iOS and Android apps, including set-up and configuration of App Store/Play Store accounts with tester distribution.
Extended a private (bare-metal) Kubernetes cluster, including:
Whilst in between contracts, rebuilt the office KVM/vagrant/ansible "enterprise" network, including:
Contracted to Sopra Steria, to design & build a backend API for an Angular frontend application.
An agile scrum project with fortnightly sprint cycles.
The API is built on Node.js, using the Express 4.x framework and Sequelize 4.x, with data stored in a Postgres 11 database. Both the application servers (containers) and the database (AWS RDS) are hosted on UK Gov PaaS.
Full ownership: deriving the required APIs from application requirements (stories & epics), defining and maintaining the API namespace and methods, implementing the endpoints (code and test), documenting the endpoints, supporting the frontend developers in their use of the API, and supporting DevOps on CI/CD of the API through dev, test, preprod and production.
From the given alpha (demo) code, immediately introduced transaction handling for multiple writes, API HTTP 40x/50x error responses, and an endpoint integration test suite using jest, supertest and faker. Introduced a local proxy allowing the frontend developers to work autonomously, consuming the API already deployed into the controlled dev environment.
When it came time for end-user authentication, introduced JWT for stateless authorisation on each API endpoint, implementing the authentication and authorisation framework to generate/renew the JWT and validate it using middleware and a well-crafted API namespace, and supporting a colleague with their implementation of bcrypt (password hashing) authentication logic.
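A minimal sketch of the stateless JWT validation idea (HS256 via Node's crypto). This is illustrative only: the real implementation, its claims and its secret handling differed, and in production a maintained JWT library would normally be used.

```typescript
import { createHmac } from "crypto";

// Sketch: validate a compact HS256 JWT and return its claims, or null
// if the signature is wrong or the token has expired.
function verifyJwt(token: string, secret: string): Record<string, any> | null {
  const [header, payload, signature] = token.split(".");
  if (!header || !payload || !signature) return null;
  const expected = createHmac("sha256", secret)
    .update(`${header}.${payload}`)
    .digest("base64url");
  if (expected !== signature) return null; // signature mismatch
  const claims = JSON.parse(Buffer.from(payload, "base64url").toString());
  if (claims.exp && claims.exp < Date.now() / 1000) return null; // expired
  return claims;
}
```

Applied as middleware, a null result maps straight to an HTTP 401 before the endpoint handler runs.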
With that, plus a set of new API endpoints for updating and retrieving rich form data, worked with the project DevOps and DBA to set up and deploy the beta application release into new Gov PaaS preprod and production hosting environments. Introduced convict for schema-based, environment-specific configuration.
Following the beta release, introduced a full audit framework across the backend API using entities and managed properties (prototype and factory patterns from the Gang of Four), with audit data stored in the database at both entity and property level.
Extended the environment-specific configuration to integrate with AWS Secrets Manager, allowing sensitive information to be managed centrally for all environments (as opposed to command-line ENV variables).
With sensitive data now managed centrally, and using experience gained on a previous contract through Semantic Integration, introduced serverless Daily Snapshot reporting for the client's analytics team in Leeds, using the Serverless Framework and AWS Lambda with Secrets Manager generating the JWT to access the reports API, with the generated reports stored (& managed) securely on AWS S3 and signed links distributed by email (AWS SES) and Slack (webhooks). The report was scheduled using AWS CloudWatch Events to run the lambda function. These daily snapshot reports, although not tied to the backend deployment, were deployed into each of dev, staging, preprod and production, thus being able to demonstrate and gain client approval of the reports through the traditional deployment cycles.
Continued to enrich the backend API with new endpoints, including extending the automation test framework, following story refinement as features were developed in the frontend. This included taking full responsibility for the database schema: patching the dev database and providing quality DB patch scripts for the DBA to run in the staging, preprod and production environments.
Although not within my responsibilities, took ownership of the beta Data Migration (just 42 users), working through the incumbent Oracle database schema (with no access to the incumbent development team), documenting the mapping and writing pgsql (Postgres functions) to quickly and repeatably migrate the data. Stood up a migration application server and worked closely with the client's service support team to achieve the necessary review, approval and sign-off of the users' migrated data, critical for securing client sign-off for the first true application release. This formed the start of a close working relationship with the client's service team, as a demonstration of the ability to deliver good-quality solutions quickly. Part of the migration solution included updating the backend authentication endpoint to support the incumbent's hash login method, allowing users to seamlessly reuse their old application credentials.
Took ownership of the Bulk Upload capability via new API endpoints, which included uploading files via the frontend application to AWS S3 via S3 signed PUT URLs, mapping of reference data from "bulk upload externalised references" to the application's internal references (online transformation), extensive validation logic, and reusing the backend API entity and managed-property framework to ensure full auditing. Delivering this complex and crucial functionality was only possible through leveraging the direct relationship with the client's service team in Leeds, established during the beta data migration. This involved two visits to Leeds to work onsite with the service team manager to secure client acceptance.
Following the initial beta data migration, carried out the full migration of 22,000 users and their data from the incumbent dataset, including a performance improvement to migrate data concurrently, reducing migration time from 12 hours to 4.5 hours.
Identified a gap in the support of the application, in that reference data is locked away in the database, requiring multiple project resources to update it. From my time at Photobox, identified the need for a CMS to maintain such reference data. Also identified that some administrative stories in the backlog were not best served through the frontend application; a more collaborative approach could be made available.
Shortlisted four CMSs: strapi, KeystoneJS, Nodebeats and Apostrophe. Reviewed these CMSs on their ease of installation, customisation, security and integration with the backend API. Secured the selection of strapi, because of its rich API capability (let down only by its Administrative Console user security). Built the AWS EC2 server instances (one for dev and one for test) and an AWS Hosted Zone from the registered domain using terraform automation. Manually installed Node.js and strapi, but utilised a remote MongoDB Atlas database provision. Security was key; achieved pen-test acceptance first time.
Used AWS Kinesis, with AWS IAM roles/policies for each of dev, test, preprod and production, to pump data from the backend API on every create, update and delete, and ingest it into a MongoDB Atlas database instance, merging disparate entities into single documents within MongoDB collections, affording the power of MongoDB's rich data queries and aggregation pipeline to quickly extract data.
Used pm2 to install and manage as services both the strapi application and a complementary custom Slack App API, to handle the interaction between Slack /slash commands and dialogs and strapi.
Modified the reference data API endpoints to allow the "PUT" method to securely update the reference data records from changes to data in strapi.
Introduced AWS SNS to notify all new registrations, with separate topics for dev, test, preprod and production, which invoked an AWS Lambda function (built and deployed using the Serverless Framework) to look up data from the MongoDB store, enrich the registration data and then post to Slack (webhook), with buttons to approve/reject, which in turn securely invoked the relevant backend API to approve or reject. All properly secured using AWS IAM roles and policies.
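The approve/reject Slack message can be sketched with the attachments/buttons payload style of that era. The ids, field values and wording below are illustrative assumptions, not the project's actual payload.

```typescript
// Sketch: build the Slack message the registration lambda posts,
// with interactive approve/reject buttons carrying the user id.
function registrationMessage(userId: string, email: string) {
  return {
    text: `New registration: ${email}`,
    attachments: [
      {
        text: "Approve or reject this registration",
        callback_id: "registration_approval",
        actions: [
          { name: "approve", text: "Approve", type: "button", value: userId },
          { name: "reject", text: "Reject", type: "button", value: userId },
        ],
      },
    ],
  };
}
```

When a button is pressed, Slack posts the `callback_id` and action back to the Slack App API, which then calls the relevant backend endpoint.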
As part of this demo, also included Slack /slash commands making it easy to search data in the MongoDB database; secured using the Slack command signature and Slack signing secret (stored securely in AWS Secrets Manager).
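The /slash command security follows Slack's signing-secret scheme: an HMAC-SHA256 over `v0:<timestamp>:<raw body>` compared against the `x-slack-signature` header. A minimal sketch using Node's crypto:

```typescript
import { createHmac, timingSafeEqual } from "crypto";

// Sketch: verify a Slack request signature ("v0=..." header value)
// against the signing secret, per Slack's documented scheme.
function verifySlackSignature(
  signingSecret: string,
  timestamp: string, // x-slack-request-timestamp header
  rawBody: string,
  signature: string // x-slack-signature header
): boolean {
  const base = `v0:${timestamp}:${rawBody}`;
  const expected =
    "v0=" + createHmac("sha256", signingSecret).update(base).digest("hex");
  const a = Buffer.from(expected);
  const b = Buffer.from(signature);
  // Constant-time comparison avoids leaking how many bytes matched.
  return a.length === b.length && timingSafeEqual(a, b);
}
```

In practice the timestamp should also be checked for staleness (Slack suggests rejecting requests older than a few minutes) to defeat replay.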
Having recently introduced automation switches to my house, turning the lights on/off required interacting with an Android app, and meant having to have a phone to hand early in the morning and late at night. I bought an Alexa Echo to allow voice activation of the lights.
But having an Alexa, I then wanted to create my own Alexa Skill. At first, I struggled with the concept of invocation names/utterances, and the eclectic approach used within the Alexa Developer Console to add validation and dialogs on slots. But after a week of "try this, try that", I finally got to understand the subtleties of invocations, intents, utterances, slots, slot types, dialogs and validations. I have submitted my skill for certification, but am currently held back on my choice of "invocation", for which I have organised a competition with my friends & family.
The backend of the skill is, of course, an AWS Lambda function (Node.js). Created the code framework using the Serverless Framework, and used terraform to maintain the necessary IAM role and policies for the lambda. All code runs both locally and remotely within the lambda. All local code is unit tested with 100% code coverage.
The lambda uses Axios to interact with TFL's public API to get a list of the next bus arrivals for a given 'Stop Point'. It uses AWS Secrets Manager to hold my TFL API key details.
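The arrivals-to-speech step can be sketched as below. The field names (`lineName`, `destinationName`, `timeToStation`) match TFL's arrivals payload, but the function and its phrasing are illustrative assumptions, not the skill's actual code.

```typescript
// Sketch: turn TFL StopPoint arrival predictions into an Alexa reply.
interface Arrival {
  lineName: string;
  destinationName: string;
  timeToStation: number; // seconds until arrival
}

function arrivalsSpeech(arrivals: Arrival[], max = 3): string {
  if (arrivals.length === 0) return "There are no buses due.";
  const next = [...arrivals]
    .sort((a, b) => a.timeToStation - b.timeToStation) // soonest first
    .slice(0, max)
    .map(
      (a) =>
        `the ${a.lineName} to ${a.destinationName} in ` +
        `${Math.round(a.timeToStation / 60)} minutes`
    );
  return `Next buses: ${next.join(", ")}.`;
}
```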
The lambda posts notifications to a Slack channel, using rich formatting, including details of the incoming event (on error/unexpected intent/missing destination). Multiple levels of notification (none, error only, .... through to trace level), controlled with a lambda env variable, making it easy to control the level of notification once deployed.
In preparation for a new role, I set out to learn terraform. Whilst already familiar with vagrant/ansible for the office server, I set about using terraform to provision/tear down one of multiple VPCs (based on a given environment of dev, test, acceptance or production), with the VPC/subnets chosen from a lookup of predefined definitions.
The VPC includes public and private subnets across one or more Availability Zones.
The VPC includes a bastion virtual server (Amazon AMI) deployed into each public subnet, along with the Security Group necessary to allow remote SSH (using a nominated key-pair) access to it, and from the bastion remote SSH access to other public and private subnet guests. Whilst provisioning the bastion guest, used terraform to create a new policy and IAM role (with assume-role) to run against the instance.
Following the recent (careless) loss of a KVM guest, rebuilt the server (2×8-core Xeon with 78GB of RAM) to be fully provisioned: PXE boot (from a Raspberry Pi) of Ubuntu Server with a post-install script that then configures KVM/libvirt. TODO: replace the post-install script with 'cloud-init' (native Ubuntu provisioning tech, also supported by AWS EC2).
A semi-auto-provisioned vagrant/libvirt/ansible "Fedora server" guest (manual creation of the guest, but the guest then provisions itself via ansible). This guest is then able to provision all other guests.
A manually provisioned firewall gateway guest (Untangle), presented to the home network and to each of the KVM host-only networks, with ingress control and gateway port forwarding.
From a CentOS 7 vagrant box, a collection of reusable common ansible tasks to provision a base WOZiTech CentOS-specific server instance, which includes default packages (present/absent – lockdown), network reassignment (through the Untangle firewall), firewalld services reassignment and lockdown, and an optional set of docker prerequisites. Experienced the pain of ansible::yum::latest; an aspect of the way "yum check" works makes using latest extremely slow. TODO: turn these common tasks into a reusable role, including storage provisioning via LVM, and link up to a HashiCorp Vault (to store SSH public/private keys for a default set of users); need to provision the Vault guest.
A wiki.js guest (CentOS 7), using ansible to install dependencies (git2, Node.js and MongoDB), manage directories and users (non-system privilege), and install the wiki.js application, a custom config file and a systemd service to manage the wiki.js lifecycle using an ansible template. Idempotent. TODO: back up and restore users to HashiCorp Vault when reprovisioning – to allow full recovery of the wiki.js provisioning – which will include installing a MongoDB Change Stream event on the users collection.
A proxy guest (CentOS 7), serving as a reverse proxy, using ansible to automate docker installation, with two docker instances: one nginx instance with custom templates to define the default and wozitech.asuscomm.com (DDNS) reverse proxy to wiki.js, and a second Let's Encrypt instance to provide the SSL certificate for the wozitech.asuscomm.com domain. systemd services manage each of the containers on start-up. Idempotent. Initially tried using jwilder's nginx-proxy docker image, but swapped to the native nginx docker image after realising jwilder's proxy is for reverse proxying other docker instances running on the same host, whereas I needed a reverse proxy to another host. TODO: introduce a forward-proxy docker container (squid).
A Sonatype Nexus3 Repository Manager guest, to serve as a local repo for all custom Docker images and custom Helm (Kubernetes) projects, along with a cache of npm (Node.js) and yum (CentOS) packages. Used the ansible role ansible-thoteam.nexus3-oss. Overcame a limitation with the role, which was failing to identify the latest version, by reviewing and understanding the code (simply had to set "nexus_version" once I had determined the current latest version).
26th June - advanced analytics using aggregation deep dive
27th June - predominantly the advanced development workstream, including Stitch (MongoDB's equivalent of Lambda and DynamoDB Streams)