Sunday, May 29, 2016

Serverless, NoOps, and Silver Bullets

In the aftermath of serverlessconf, Twitter was abuzz with the #serverless tag, and it didn't take long for the usual NoOps nonsense to follow (Charity Majors' aptly named "Serverlessness, NoOps and the Tooth Fairy" session notwithstanding).

When you look at operations as the traditional combination of all activities necessary for the delivery of a product or service to a customer, "serverless" addresses the provisioning of hardware, the operating system, and, to an extent, middleware.

Even when we ignore the reality that many of the services used in the enterprise will still run on systems that are nowhere close to cloud-readiness and containerization, approaches like Docker will only take you so far.

Once you virtualize and containerize what does make sense, there are still going to be applications running on top of the whole stack. They will still need to be deployed, configured, and managed by dedicated operations teams. I wrote my expanded thoughts on the topic a couple of months ago.

One may argue that a well-written cloud-ready application should be able to take remedial action proactively, but those are certainly not the kind of applications showing up on conference stages. Switching from RESTful methods deployed on PaaS to event listeners in AWS Lambda will not make the resulting application self-healing.
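To make the point concrete, here is a minimal sketch of an AWS Lambda event handler in Java, assuming the aws-lambda-java-core library; the OrderHandler class and its processOrder logic are hypothetical. The platform retries and scales invocations, but any recovery beyond that is still the application's job:

```java
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;

public class OrderHandler implements RequestHandler<String, String> {

    @Override
    public String handleRequest(String orderId, Context context) {
        try {
            // Hypothetical business logic; nothing about the event-based
            // model makes this any more self-healing than a REST method.
            return processOrder(orderId);
        } catch (RuntimeException e) {
            // Compensating actions, fallbacks, and circuit breaking all
            // still have to be written by hand, exactly as on a PaaS.
            context.getLogger().log("Failed to process " + orderId + ": " + e);
            throw e; // rethrowing only triggers the platform's retry policy
        }
    }

    private String processOrder(String orderId) {
        return "processed:" + orderId;
    }
}
```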

While I do appreciate the "cattle-not-pets" philosophy and the disposability of a 12-factor app, I have actually worked as a site reliability engineer for a couple of years, and we still needed to monitor and correct situations where we had cattle head dying too frequently, which often caused SLA-busting disruptions for end users expecting five-nines reliability.

#NoTools, #NoMethod

Leaving the NoOps vs. DevOps bone aside, when I look at event-based programming models such as AWS Lambda and IBM OpenWhisk and contrast them with full software development cycles, I start to wonder whether development shops have fully understood the model's overall readiness beyond prototyping.

What is the reality of design, development tooling, unit-testing practices, verification cycles, deployment, troubleshooting, and operations? As an example, when I look at OpenWhisk, I see NodeJS, Swift and... wait for it... Docker. There is your server in serverless, unless you are keen on retooling your entire shop around one of those two programming languages.

At the risk of offering anecdotes in lieu of an actual study, some of the discussions on unit testing for event handlers range from clunky workarounds to casually redirecting developers towards functional testing. And that should be the most basic material, next to debugging, which is also conspicuously absent.
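For what it is worth, a handler like the one sketched earlier can be unit-tested directly, but only by stubbing out the platform entirely; a hedged sketch with JUnit 4 and Mockito, reusing the hypothetical OrderHandler:

```java
import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.mock;

import com.amazonaws.services.lambda.runtime.Context;
import org.junit.Test;

public class OrderHandlerTest {

    @Test
    public void processesKnownOrder() {
        OrderHandler handler = new OrderHandler();
        // The Context is a mock, so none of the platform behavior
        // (retries, throttling, timeouts) is exercised here, which is
        // exactly why such tests get redirected to functional testing.
        Context context = mock(Context.class);
        assertEquals("processed:42", handler.handleRequest("42", context));
    }
}
```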

Progress is progress and the lack of a complete solution should never be a reason to shy away from innovation, but at the same time we have to be transparent about the challenges and benefits.

If the vision takes a sizable number of tinkerers building skunkworks on the new platforms, that is all good, but we have to realize there is also an equally sizable number of shops out there looking for the next silver bullet. These shops will be quick to blame their failures on the hype rather than on their own lack of understanding of the total cost of development and operations of a cloud-based offering.

Click-bait proclaiming development methods dead is alive and well for a reason, at least until you realize that the big development costs depend more on the Big and Complex stuff than on how much time developers spend tending to pet servers under their desks.

As the serverless drumbeat continues, it remains to be seen whether we will witness an accompanying wave of serious discipline prescribing the entire method before another one is put out as the next big thing.

The obvious next step would be codeless code, which is incidentally the name of one of my favorite blogs. It contains hundreds of impossibly well-written and well-thought-out pieces about software development, including this very appropriate cautionary tale on the perils of moving concerns up the stack without understanding how the lower layers work.


Friday, January 1, 2016

AspectJ for Java logging: Benchmarking aspect instrumentation - part 8

While preparing to introduce the Java logging Aspect covered in this series to a new project, I wanted to quantify the potential overhead in a formal way since my earlier exercises showed negligible impact but were not documented.

While searching for an existing benchmarking test, I came across this blog posting, which was not quite a formal benchmark but had some well-written code that would fit my needs. A big thanks to the folks at Takipi for the effort they put into it.

The required modifications were simple: creating a new test class matching the existing test hierarchy, then creating a simple logging aspect to recreate the explicit logging call found in the other tests.
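For reference, a minimal sketch of that kind of aspect in annotation-style AspectJ, assuming SLF4J as the logging facade; the pointcut and names are illustrative, not the exact ones in my branch:

```java
import org.aspectj.lang.JoinPoint;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Before;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

@Aspect
public class LoggingAspect {

    private static final Logger log = LoggerFactory.getLogger(LoggingAspect.class);

    // Fires before any method named benchmarkedMethod, emitting the same
    // message the other tests produce with an explicit logging call.
    @Before("execution(* benchmarkedMethod(..))")
    public void logEntry(JoinPoint jp) {
        log.info("Entering {}", jp.getSignature().toShortString());
    }
}
```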

My merge request has not been accepted into the master branch as of this writing, but you can see the updated tree in my own branch of the fork at GitHub: https://github.com/nastacio/the-logging-olympics/tree/nastacio-aspectj.

The results showed no consistent impact, which is to be expected since the AspectJ compiler inserts essentially the same bytecode you would get from typing the advice body at each matched join point yourself.
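To illustrate the equivalence, assuming the aspect sketched above and classes named only for this example: after compile-time weaving, the second variant below behaves the same as the first, where the logging call is written by hand:

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Variant 1: explicit logging call, as in the original benchmark tests.
class ExplicitLogging {
    private static final Logger log = LoggerFactory.getLogger(ExplicitLogging.class);

    public void benchmarkedMethod() {
        log.info("Entering benchmarkedMethod()");
        doWork();
    }

    void doWork() { /* benchmark payload */ }
}

// Variant 2: no logging in source; the @Before advice is woven in at the
// start of the method, yielding essentially the same bytecode as Variant 1.
class WovenLogging {
    public void benchmarkedMethod() {
        doWork();
    }

    void doWork() { /* benchmark payload */ }
}
```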