Picking up on the last longer piece, and on a very good idea that @codePrincess linked to in a blog post: code that says what it does. Source code, too, is a medium of communication between human and human – and the person you are writing for may well be the future you. Just as the user interface is an act of communication, so is your source code.
Do you want to impress the reader with your optimization skills while producing code that is extremely hard to read? Do you want to be considerate and share a lot of your thought process in your code, or is it better to leave the reader guessing as to why you chose a particular road to implementation? Try to be nice to the person who will have to live with your code later on – it just may be yourself!
Even though we tend to forget it, software quite often is a means of communication between humans. I am not referring to the way the product gets used (though a lot of software nowadays serves direct human-to-human communication, be that one-to-one or one-to-many) but to the fact that the entire user interface of an application is an act of communication between the developer or development team and the user.
The last few years have seen tremendous progress in understanding which factors influence the user’s experience, and which techniques and tools make an application easier and better to use. Computers as tools have certainly evolved and can be used more effectively. Still, the tendency is to see the application as something detached from the people who design and build it. I do not think this position holds up, and we do ourselves a disservice if we, as producers of software, do not look at the entire communication process in depth.
Look at your own experience as a user of software; I’m sure you can come up with examples of software that treats you well: considerate, polite, helpful, playful. Just as quickly, I am sure, you will find bad examples: obnoxious, arrogant, dysfunctional apps. Chances are that the people who authored those interfaces, and their talent for interacting with other humans, are not so different from their works.
User experience designers already use personas as a tool, envisioning typical users and how they would go about interacting with the product. If you envision those people anyway, think about how you would interact with them. Imagine sitting with them in a meeting or on a date, wanting to solve a problem together (perhaps the very one your application is meant to solve) or simply trying to have a good time with them. A good conversation.
Conversation and communication as a paradigm for user interfaces has another interesting implication: that of cultural bias and presumptions. This ranges from not being able to use language-specific diacritical marks in foreign software products, and a lack of internationalization for date and currency fields, to a lack of sensitivity in problematic areas (a classic example being country flags used as a selector for language localization). There are certainly other lenses to apply here, such as gender or ethnic presumptions in software – most of what we interact with is still designed by white men.
So if we apply concepts that we know about human communications to software products, can we gain new insights or develop even better applications? Hopefully. We certainly have a new tool chest available: a lot of research has been done on interpersonal communications and relationships. We would do well to take that to heart and apply it to our work.
In more projects than I care to think about, I’ve seen a pattern that I dislike the more I see it. It appears quite innocent, but it brings about cost with no discernible benefit.
Including all of the application’s source code on each and every call made to a page. It may be one included file that lists every file the app brings along; it may be a list of require_once statements in every entry page. But what is the benefit? Even if the page loads only those files that it might eventually need, it probably includes more than is warranted by the task at hand.
One solution is to use an autoloader: let the interpreter figure out by itself whether it has already seen everything it needs to execute a given piece of code. The other option is to require external files only at the point where you’re certain you need them. Does your code sanitize and validate input before it loads the classes that then work with the sanitized values? Probably not – it’s much more common to first load all the code, and only then start to work with what you are given.
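The post’s context is PHP (require_once and autoloaders), but the idea translates directly to other languages. Here is a minimal sketch in Python, where an import placed inside a function is deferred until the function actually runs; the function name and input are made up for illustration:

```python
def parse_report(raw):
    """Validate input first, and only then load the code that parses it."""
    if not raw.strip():
        raise ValueError("empty input")
    # Deferred load: this import runs on the first call, not at start-up.
    # json here stands in for a heavyweight dependency.
    import json
    return json.loads(raw)

print(parse_report('{"status": "ok"}'))  # prints {'status': 'ok'}
```

If the validation rejects the input, the parsing code is never loaded at all – the same property a PHP autoloader gives you for classes that are never instantiated.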
Lazy loading means that the interpreter only parses and runs the code it actually needs, which contributes to better application performance (because we’re not spending time on code we don’t really need anyway). It also means using fewer resources on the server – ultimately, having your code use less electric power. But there are even more benefits: code that has not been loaded cannot cause any kind of interference, so you’re certain you don’t have to look into those files when you’re debugging. And code that has not been loaded also cannot be used for security exploits – fewer side effects there as well.
It’s not that this is a particularly complex intellectual challenge; it’s more a matter of perspective, and maybe of writing a little infrastructure code for your app. But to me, the benefits are worth the few extra minutes spent thinking about your code.
I recently had a conversation with a friend who is hunting for a programming job. He is a smart guy who holds a PhD in physics, has spent a good number of years in research, and – as part of the life of a modern physicist – has already written code as part of his day job. But he said something that got me thinking: that once you master the basic concepts of programming, you can program in just about any language you’re put in front of.
I strongly disagree. In many ways, Ludwig Wittgenstein’s “The limits of my language mean the limits of my world” holds even more true for programming languages than it does for natural languages. Of course, given a good enough introduction to a language and some time, you can write programs in many languages – provided they’re similar enough to what you know already. But to reach a certain level of mastery in any language, to be able to write idiomatic code, you have to immerse yourself quite deeply and practice quite a bit. To my mind, most people take a sizable number of years to reach that level of expertise – and not recognizing that suggests this level hasn’t been attained yet.
To be a good programmer, you should know your language inside out. Can you give a list of your language’s shortcomings that goes beyond “it sucks”? Do you understand where the limits of your language lie, and which design decisions were made with which purpose in mind?
And to be an excellent programmer, in how many different languages have you attained that status? In how many different types of languages? If you’re given a specific problem, how do you decide which language is the appropriate tool for the job?