A few weeks ago marked 30 years since yours truly started out as a graduate IT trainee at Royal Insurance (now RSA) in 1988.
Back then, there were no PCs, just dumb terminals; no mobile phones, just landlines; no email, no internet. But the biggest shock of all: other people could smoke at your desk!
Anyway, I thought I’d put together some thoughts looking back over the ‘first 30 years’. So, in no particular order, here goes…
Training, Skills Shortage & Staff Retention
The main reason I accepted the job offer from Royal Insurance was the training programme. I had an unblemished track record of being truly crap at both BBC Basic and Pascal whilst at Leeds Uni. Anyone willing to spend 3 years training me from scratch as an analyst/programmer and pay me at the same time is more than welcome to give it a shot, I reasoned.
The training I received at ‘The Royal’ was, and is, second to none. It was accredited and audited by the British Computer Society (BCS) and was like studying for another degree. I felt like giving up a few times – writing Cobol to run on an IBM mainframe will do that – but I managed to get through it intact. The dropout rate was remarkably low.
The old adage ‘Hire for attitude, train for skills’ is something I strongly believe in to this day. It guides all of our candidate selection at VLDB. For those who won’t make the training investment ‘in case they leave’, I offer Richard Branson’s guidance. Fine words indeed.
Stop moaning about skill shortages, find & train people and treat them well. Simples!
My first role at Royal was in the accounts support team. Right from the off, the importance of best practices and ‘good code’ as a long-term cost reduction strategy really resonated. Little did I know that I was being schooled in ‘technical debt’ avoidance right at the start of my career.
A lengthy stint at Lloyds TSB in the ’90s included several years in the application support team. The Teradata best practice guidelines developed by that team over 20 years ago are still in use today, albeit in modified form. Support isn’t just about fixing broken code!
Put good people into application support and pro-actively coach developers and enforce standards to avoid technical debt.
During the latter stages of my time at Royal, I moved to the ‘reporting’ team that developed MI applications on a new-fangled Teradata system. Little did I know we were amongst the earliest adopters in the world, right here in Liverpool.
Having run my first Teradata SQL query 28 years ago, I’m still staggered just how much Teradata got right from the start. It’s a truly remarkable achievement when you realise the fundamentals haven’t had to change in all that time.
In addition to Teradata getting so much right so long ago, there have also been no major technology missteps along the way. I can’t remember anything being consigned to the ‘Oops, what were we thinking?’ pile.
A big ‘well done’ to the original Teradata folks. Very fine work indeed.
Several years ago I was dispatched to a client to help referee a disagreement between two camps: the architecture team and the data modelling team.
The data modellers worked in a vacuum and did ‘data modelling by the book’ in order to implement an industry-specific vanilla data model they’d bought. Academic purity was all that mattered.
This didn’t sit well with the architecture team, who pointed out that time-to-market and user satisfaction were being sacrificed at the altar of puritan data modelling beliefs.
After interviewing dozens of stakeholders, and ruminating on the standoff, I decided the data modellers were causing real pain and suffering. Furthermore, their behaviour was a largely unavoidable consequence of having bought a vanilla model. I’ve since seen this play out at several client sites, some of whom really should have known better.
A vanilla data model is not a silver bullet. Be prepared to build a separate, physical, semantic layer…which will probably look a lot like your original home-grown data model.
MPP, Hadoop & Critical Analysis
Teradata’s eponymous MPP database started shipping in 1984.
Google’s MapReduce white paper was published 20 years later, in 2004. Big G was subsequently awarded a MapReduce patent, which DeWitt and Stonebraker criticised for lack of novelty, citing Teradata as prior art. I side with Dave and Mike on this one.
Anyway, inspired by Google’s MapReduce and Google File System (GFS) research, Doug Cutting and the good folks at Yahoo! gave rise to the open-source Apache Hadoop framework in 2006. Companies such as MapR, Hortonworks, and Cloudera were formed specifically to monetise Hadoop.
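For anyone who hasn’t peeked under the bonnet, the MapReduce model itself is simple enough to sketch in a few lines of Python. This is purely an illustrative, in-memory word count of my own devising – not Google’s or Hadoop’s actual implementation – showing the map, shuffle and reduce phases that a real cluster would run in parallel across many nodes:

```python
from collections import defaultdict

def map_phase(documents):
    # Map: emit a (word, 1) pair for every word in every document.
    for doc in documents:
        for word in doc.split():
            yield (word.lower(), 1)

def shuffle_phase(pairs):
    # Shuffle: group values by key (the framework does this across
    # the network in a real cluster).
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(grouped):
    # Reduce: sum the counts emitted for each word.
    return {word: sum(counts) for word, counts in grouped.items()}

docs = ["MPP databases scale", "Hadoop databases scale too"]
counts = reduce_phase(shuffle_phase(map_phase(docs)))
```

Squint at it and you can see why the database old guard cited MPP shared-nothing query engines as prior art: partition the data, process the partitions independently, then bring the partial results back together.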
Over the last decade we’ve witnessed the all-too-familiar tech industry chain of events in the Hadoop world: the latest tech silver bullet gets VC backing; sales & marketing money flows; sales reps jump ship to the newest game in town; analysts publish gushing praise (not paid for, no sir!); conferences are held; fanbois gulp down the Kool-Aid; management jump on the FOMO bandwagon and POCs are hastily scheduled.
The Hadoop projects we’ve been involved in have almost all entailed moving off Hadoop onto something easier to set up, use and manage. SQL queries that run economically at any scale are the requirement. Hadoop isn’t the answer, no matter how much effort the Hadoop slingers put into SQL-on-Hadoop. We’ve had a scalable, SQL-compliant, MPP database for over 30 years, remember?
I stand by the assertion made early last year: for most folks most of the time an MPP database will deliver the business requirement. Hadoop simply isn’t needed.
VCs, analysts, sales reps, conference organisers and fanbois seem able to nullify any attempt at critical thinking in the tech-using community.
Keep It Simple Stupid (KISS)
The old ‘KISS’ adage is the guiding principle I adhere to most. For the non-believers, I offer the following KISS disciples for you to argue with:
‘Simplicity is the ultimate sophistication’ – Leonardo da Vinci
‘Everything should be made as simple as possible, but no simpler’ – Albert Einstein
On that note, thanks for your time…I’m off to wrangle some thoroughly modern Python machine learning code. Cobol? No thanks!