John Allspaw: “On Being A Senior Engineer”

In absolutely one of the best posts on what I consider to be the responsibilities of my work and what I aspire to practice, John Allspaw has written required reading for anyone looking to become a Senior Engineer, or for anyone who wants to grow into a better one: “On Being A Senior Engineer” (via rc3.org).

Thought Provokers From Python’s Guido van Rossum and Clojure’s Rich Hickey

At Strange Loop 2011, Clojure’s Rich Hickey gave a presentation (video) on programming and simplicity that ruffled some feathers and triggered a heated discussion on Reddit.

Duncan McGreggor decided to contact Python’s Guido van Rossum to interview him about his keynote talk at PyCon US 2012 (video), specifically his thoughts on callbacks.

Dizzying but invisible depth: on complexity

Jean-Baptiste Queru, on his Google+ profile, posts a poetic doozy of a post, “Dizzying but invisible depth”:

Today’s computers are so complex that they can only be designed and manufactured with slightly less complex computers. In turn the computers used for the design and manufacture are so complex that they themselves can only be designed and manufactured with slightly less complex computers. You’d have to go through many such loops to get back to a level that could possibly be re-built from scratch.

Once you start to understand how our modern devices work and how they’re created, it’s impossible to not be dizzy about the depth of everything that’s involved, and to not be in awe about the fact that they work at all, when Murphy’s law says that they simply shouldn’t possibly work.

For non-technologists, this is all a black box. That is a great success of technology: all those layers of complexity are entirely hidden and people can use them without even knowing that they exist at all. That is the reason why many people can find computers so frustrating to use: there are so many things that can possibly go wrong that some of them inevitably will, but the complexity goes so deep that it’s impossible for most users to be able to do anything about any error.

Metrics and damn metrics: on systems

Sometimes we get caught up in numbers and miss what’s real. This can happen especially when we focus on the wrong numbers, writes Arpit Mathur in a sharp post. I’d add that even if you are looking at the right numbers, without context, just like a soundbite, you could draw the wrong lessons or encourage behavior that you didn’t desire.

Always ask: What is it we are trying to effect or learn about? Are we looking at the right things? Do we have the context necessary to understand it?
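To make the soundbite problem concrete, here is a toy sketch (the numbers are invented): two honest summary statistics of the same latency sample that, taken without context, teach opposite and equally wrong lessons.

```python
import statistics

# Hypothetical response times in milliseconds: most requests are fast,
# one is terribly slow.
latencies_ms = [20, 22, 19, 21, 23, 20, 18, 22, 21, 2500]

print(statistics.mean(latencies_ms))    # ~268.6 -- "average latency is terrible!"
print(statistics.median(latencies_ms))  # 21.0   -- "latency is fine!"
# Both numbers are "right"; neither, alone, tells you that one request
# in ten is two orders of magnitude slower than the rest.
```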

Related:

“Systems Thinking, Lean and Kanban”: “Creating Useful Measures”

Michael Feathers: Rocky Mountain Ruby 2011 Keynote: “Code Blindness” – fantastic – watch the whole thing.

Impermanence and Software Design: on systems

When you’re building software, it is probably best to take a half-Buddhist view of things. Kent Beck, in a recent Facebook note, writes about building software that won’t be around longer than he is:

…nothing I am doing now with software will remain in a hundred years. Indeed, there was a time not long ago when the only software I had ever written that was still running was JUnit. Thousands of programs started, and my work was in danger of becoming extinct.

I could try to achieve timelessness in my designs and encourage others to do the same, but in the end nothing I program will outlive me. It would be easy to despair over this, to go into my shell and settle for “good enough”. To do so would be to ignore both the immediate impact of my work, used by hundreds of millions of people today (one of the great things about working at Facebook), and the second order effects of my work on the lives and attitudes of others. No, my programs won’t be here in a century, but my work still matters.

Related:

Michael Mehaffy and Nikos Salingaros: “The Pattern Technology of Christopher Alexander”: “We have to remember that software engineers, by nature of their work, have a big problem. Their job is not to solve problems for computers, but for human beings; the computers are only tools in that process.”

Case Statement: “Articulate Coding” – his first post – a good one – keep it up!

InfoQ: Kent Beck: “Responsive Design” 1hr Presentation. Worth it!

On ever-growing complexity in programming: on systems

Edsger W. Dijkstra gave a lecture in 1972 that has since come to be called “The Humble Programmer”. It’s a short piece that explains why software development, why programming, was growing more, not less, complex over time, and it offers some inspiration for dealing with that. There are some choice quotes in here that I’m going to include, but read the whole thing.

On LISP:

With a few very basic principles at its foundation, it has shown a remarkable stability. Besides that, LISP has been the carrier for a considerable number of in a sense our most sophisticated computer applications. LISP has jokingly been described as “the most intelligent way to misuse a computer”. I think that description a great compliment because it transmits the full flavour of liberation: it has assisted a number of our most gifted fellow humans in thinking previously impossible thoughts.

Let’s call it TDD before TDD was coined:

Today a usual technique is to make a program and then to test it. But: program testing can be a very effective way to show the presence of bugs, but is hopelessly inadequate for showing their absence. The only effective way to raise the confidence level of a program significantly is to give a convincing proof of its correctness. But one should not first make the program and then prove its correctness, because then the requirement of providing the proof would only increase the poor programmer’s burden. On the contrary: the programmer should let correctness proof and program grow hand in hand.
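Read with modern eyes, that rhythm looks a lot like test-driven development. A minimal sketch in Python, with a hypothetical leap_year example (mine, not Dijkstra’s), where the evidence of correctness and the program grow hand in hand:

```python
import unittest

def leap_year(year: int) -> bool:
    # The "program" grows only as far as the tests demand.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

class LeapYearTest(unittest.TestCase):
    # Each assertion was written before the branch of logic it exercises,
    # so the proof grows alongside the code instead of after it.
    def test_divisible_by_four(self):
        self.assertTrue(leap_year(2024))

    def test_century_is_not_leap(self):
        self.assertFalse(leap_year(1900))

    def test_every_four_centuries_is_leap(self):
        self.assertTrue(leap_year(2000))

if __name__ == "__main__":
    unittest.main()
```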

On decomposing systems:

It has been suggested that there is some kind of law of nature telling us that the amount of intellectual effort needed grows with the square of program length. But, thank goodness, no one has been able to prove this law. And this is because it need not be true. We all know that the only mental tool by means of which a very finite piece of reasoning can cover a myriad cases is called “abstraction”; as a result the effective exploitation of his powers of abstraction must be regarded as one of the most vital activities of a competent programmer. In this connection it might be worth-while to point out that the purpose of abstracting is not to be vague, but to create a new semantic level in which one can be absolutely precise. Of course I have tried to find a fundamental cause that would prevent our abstraction mechanisms from being sufficiently effective. But no matter how hard I tried, I did not find such a cause. As a result I tend to the assumption —up till now not disproved by experience— that by suitable application of our powers of abstraction, the intellectual effort needed to conceive or to understand a program need not grow more than proportional to program length. But a by-product of these investigations may be of much greater practical significance, and is, in fact, the basis of my fourth argument. The by-product was the identification of a number of patterns of abstraction that play a vital role in the whole process of composing programs. Enough is now known about these patterns of abstraction that you could devote a lecture to each of them.
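One way to read “a new semantic level in which one can be absolutely precise” in today’s code: hide the myriad cases behind a single interface, so that reasoning above that level stays proportional to program length. A hedged sketch; Storage and its implementations are hypothetical names of mine, not anything from the lecture:

```python
from abc import ABC, abstractmethod

class Storage(ABC):
    """The new semantic level: callers reason about 'storage',
    not about files, sockets, or memory."""

    @abstractmethod
    def put(self, key: str, value: bytes) -> None: ...

    @abstractmethod
    def get(self, key: str) -> bytes: ...

class MemoryStorage(Storage):
    # One of a myriad possible cases beneath the abstraction.
    def __init__(self) -> None:
        self._data: dict[str, bytes] = {}

    def put(self, key: str, value: bytes) -> None:
        self._data[key] = value

    def get(self, key: str) -> bytes:
        return self._data[key]

def archive(store: Storage, key: str, payload: bytes) -> bytes:
    # Code at this level is precise about *what* happens,
    # and wholly indifferent to *how* the bytes are kept.
    store.put(key, payload)
    return store.get(key)

assert archive(MemoryStorage(), "ewd340", b"humble") == b"humble"
```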

On education:

As each serious revolution, it will provoke violent opposition and one can ask oneself where to expect the conservative forces trying to counteract such a development. I don’t expect them primarily in big business, not even in the computer business; I expect them rather in the educational institutions that provide today’s training and in those conservative groups of computer users that think their old programs so important that they don’t think it worth-while to rewrite and improve them. In this connection it is sad to observe that on many a university campus the choice of the central computing facility has too often been determined by the demands of a few established but expensive applications with a disregard of the question how many thousands of “small users” that are willing to write their own programs were going to suffer from this choice. Too often, for instance, high-energy physics seems to have blackmailed the scientific community with the price of its remaining experimental equipment. The easiest answer, of course, is a flat denial of the technical feasibility, but I am afraid that you need pretty strong arguments for that. No reassurance, alas, can be obtained from the remark that the intellectual ceiling of today’s average programmer will prevent the revolution from taking place: with others programming so much more effectively, he is liable to be edged out of the picture anyway.

There may also be political impediments. Even if we know how to educate tomorrow’s professional programmer, it is not certain that the society we are living in will allow us to do so. The first effect of teaching a methodology —rather than disseminating knowledge— is that of enhancing the capacities of the already capable, thus magnifying the difference in intelligence. In a society in which the educational system is used as an instrument for the establishment of a homogenized culture, in which the cream is prevented from rising to the top, the education of competent programmers could be politically impalatable.

On recognizing the difficulty, challenge, and opportunity:

Automatic computers have now been with us for a quarter of a century. They have had a great impact on our society in their capacity of tools, but in that capacity their influence will be but a ripple on the surface of our culture, compared with the much more profound influence they will have in their capacity of intellectual challenge without precedent in the cultural history of mankind. Hierarchical systems seem to have the property that something considered as an undivided entity on one level, is considered as a composite object on the next lower level of greater detail; as a result the natural grain of space or time that is applicable at each level decreases by an order of magnitude when we shift our attention from one level to the next lower one. We understand walls in terms of bricks, bricks in terms of crystals, crystals in terms of molecules etc. As a result the number of levels that can be distinguished meaningfully in a hierarchical system is kind of proportional to the logarithm of the ratio between the largest and the smallest grain, and therefore, unless this ratio is very large, we cannot expect many levels. In computer programming our basic building block has an associated time grain of less than a microsecond, but our program may take hours of computation time. I do not know of any other technology covering a ratio of 10^10 or more: the computer, by virtue of its fantastic speed, seems to be the first to provide us with an environment where highly hierarchical artefacts are both possible and necessary. This challenge, viz. the confrontation with the programming task, is so unique that this novel experience can teach us a lot about ourselves. It should deepen our understanding of the processes of design and creation, it should give us better control over the task of organizing our thoughts. If it did not do so, to my taste we should not deserve the computer at all!

It has already taught us a few lessons, and the one I have chosen to stress in this talk is the following. We shall do a much better programming job, provided that we approach the task with a full appreciation of its tremendous difficulty, provided that we stick to modest and elegant programming languages, provided that we respect the intrinsic limitations of the human mind and approach the task as Very Humble Programmers.
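For a feel of that grain ratio in today’s numbers, a back-of-the-envelope check (the figures are illustrative, not Dijkstra’s):

```python
import math

# A basic operation: about one microsecond. A long run: about three hours.
smallest_grain_s = 1e-6
largest_grain_s = 3 * 3600  # 10,800 seconds

ratio = largest_grain_s / smallest_grain_s  # ~1.1e10
levels = math.log10(ratio)                  # one order of magnitude per level

print(f"ratio ~ 1e{levels:.0f}, roughly {round(levels)} meaningful levels")
```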

A Mathematician’s Lament: on education

Paul Lockhart wrote an accessible read on what is wrong with math education and with the popular perception of math that our culture reinforces. It has been shared in quite a few corners of the Web, and it deserves a wider read: “A Mathematician’s Lament”:

The art of proof has been replaced by a rigid step-by-step pattern of uninspired formal deductions. The textbook presents a set of definitions, theorems, and proofs, the teacher copies them onto the blackboard, and the students copy them into their notebooks. They are then asked to mimic them in the exercises. Those that catch on to the pattern quickly are the “good” students.

The result is that the student becomes a passive participant in the creative act. Students are making statements to fit a preexisting proof-pattern, not because they mean them. They are being trained to ape arguments, not to intend them. So not only do they have no idea what their teacher is saying, they have no idea what they themselves are saying.

Even the traditional way in which definitions are presented is a lie. In an effort to create an illusion of “clarity” before embarking on the typical cascade of propositions and theorems, a set of definitions are provided so that statements and their proofs can be made as succinct as possible. On the surface this seems fairly innocuous; why not make some abbreviations so that things can be said more economically? The problem is that definitions matter. They come from aesthetic decisions about what distinctions you as an artist consider important. And they are problem-generated. To make a definition is to highlight and call attention to a feature or structural property. Historically this comes out of working on a problem, not as a prelude to it.

The point is you don’t start with definitions, you start with problems. Nobody ever had an idea of a number being “irrational” until Pythagoras attempted to measure the diagonal of a square and discovered that it could not be represented as a fraction. Definitions make sense when a point is reached in your argument which makes the distinction necessary. To make definitions without motivation is more likely to cause confusion.
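For reference, the problem Lockhart alludes to: the classic argument that the diagonal of a unit square cannot be written as a fraction.

```latex
% Suppose the diagonal of the unit square were a fraction in lowest terms:
\sqrt{2} = \frac{p}{q}, \qquad \gcd(p, q) = 1.
% Squaring and clearing denominators:
p^2 = 2q^2,
% so p^2 is even, hence p is even; write p = 2k. Substituting:
4k^2 = 2q^2 \quad\Longrightarrow\quad q^2 = 2k^2,
% so q is even as well, contradicting \gcd(p, q) = 1. No such fraction exists.
```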

Related:

Keith Devlin: “Lockhart’s Lament – The Sequel”

Slashdot: “A Mathematician’s Lament — an Indictment of US Math Education”

G.H. Hardy:

A mathematician, like a painter or poet, is a maker of patterns. If his patterns are more permanent than theirs, it is because they are made with ideas.

Screaming Architecture: on systems

In “Screaming Architecture” Uncle Bob lays out one of the biggest wins of designing to the problem domain instead of to your weapon (ahem, framework) of choice:

“If your system architecture is all about the use cases, and if you have kept your frameworks at arm’s length, then you should be able to unit-test all those use cases without any of the frameworks in place. You shouldn’t need the web server running in order to run your tests. You shouldn’t need the database connected in order to run your tests. Your business objects should be plain old objects that have no dependencies on frameworks or databases or other complications. Your use case objects should coordinate your business objects. And all of them together should be testable in-situ, without any of the complications of frameworks.”
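A minimal sketch of what that buys you (the names here are hypothetical, not from Uncle Bob’s post): a use case coordinating plain business objects, unit-tested with no web server, database, or framework in sight.

```python
# Plain business object: no framework, no database, nothing beyond the language.
class Account:
    def __init__(self, balance_cents: int) -> None:
        self.balance_cents = balance_cents

    def withdraw(self, amount_cents: int) -> None:
        if amount_cents > self.balance_cents:
            raise ValueError("insufficient funds")
        self.balance_cents -= amount_cents

# Use case object: coordinates business objects; persistence is a detail
# handed in from outside, so tests can pass in anything dict-like.
class WithdrawCash:
    def __init__(self, accounts) -> None:
        self.accounts = accounts  # any mapping of id -> Account

    def execute(self, account_id: str, amount_cents: int) -> int:
        account = self.accounts[account_id]
        account.withdraw(amount_cents)
        return account.balance_cents

# The test needs no running server and no connected database.
def test_withdraw():
    use_case = WithdrawCash({"a1": Account(balance_cents=10_000)})
    assert use_case.execute("a1", 2_500) == 7_500

test_withdraw()
```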

Anemic Domain Model: on systems

Martin Fowler wrote a piece in 2003 that addresses a subtle anti-pattern: developing your domain model code devoid of behavior. It’s a short, interesting read that is related to the development of fat controllers in MVC-ish applications: “AnemicDomainModel”:

“In general, the more behavior you find in the services, the more likely you are to be robbing yourself of the benefits of a domain model. If all your logic is in services, you’ve robbed yourself blind.”
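To make the anti-pattern concrete, a hedged before-and-after sketch (the names are hypothetical, not Fowler’s): the anemic version keeps the data object dumb and piles the rules into a service; the domain-model version puts the behavior where the data lives.

```python
# Anemic: the "domain object" is a bag of fields...
class Order:
    def __init__(self) -> None:
        self.lines: list[tuple[str, int, int]] = []  # (sku, qty, unit_price_cents)

# ...and all the logic leaks into a service layer.
class OrderService:
    @staticmethod
    def total_cents(order: Order) -> int:
        return sum(qty * price for _, qty, price in order.lines)

# Rich domain model: the behavior lives with the data it governs.
class RichOrder:
    def __init__(self) -> None:
        self._lines: list[tuple[str, int, int]] = []

    def add_line(self, sku: str, qty: int, unit_price_cents: int) -> None:
        if qty <= 0:
            raise ValueError("quantity must be positive")  # invariant enforced here
        self._lines.append((sku, qty, unit_price_cents))

    def total_cents(self) -> int:
        return sum(qty * price for _, qty, price in self._lines)

order = RichOrder()
order.add_line("book", 2, 1_500)
assert order.total_cents() == 3_000
```

The difference shows up the moment an invariant matters: the rich model can refuse a nonsensical order line at the door, while the anemic one depends on every service remembering to check.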