Crucial to good object orientation is the separation of concerns and encapsulation. Your objects need a clear API that limits the ways their internal state can be changed; the object itself must be responsible for that state and must guard its integrity. While the API defines what operations are allowed, these methods have restrictions of their own: the parameters passed to them must stay within those limits. The only solid way of checking these values is by using preconditions.
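As a minimal sketch of what such preconditions can look like in Java, consider a hypothetical `StorageBin` class (the name and limits are invented for illustration, not taken from any real system). Each method validates its parameters before touching the state, so the object can never end up in an invalid condition:

```java
// Hypothetical example: a bin that guards its own integrity with preconditions.
public class StorageBin {
    private final int capacity;
    private int stored = 0;

    public StorageBin(int capacity) {
        // Precondition: a bin without positive capacity makes no sense.
        if (capacity <= 0) {
            throw new IllegalArgumentException("capacity must be positive, was " + capacity);
        }
        this.capacity = capacity;
    }

    public void store(int amount) {
        // Preconditions: the amount must be positive and must fit in the bin.
        if (amount <= 0) {
            throw new IllegalArgumentException("amount must be positive, was " + amount);
        }
        if (stored + amount > capacity) {
            throw new IllegalArgumentException("bin overflow: " + (stored + amount) + " > " + capacity);
        }
        stored += amount;
    }

    public int getStored() {
        return stored;
    }
}
```

Because every mutation goes through the checked API, a caller passing an invalid value fails loudly at the call site instead of silently corrupting the object.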
Ever since I started working for my new company I have been working on this type of warehouse management system, although we call it ‘material flow control’. Many things have changed since the initial design (which I wasn’t part of), the biggest one being the move from a generic to a specific approach. When that decision was made, my team lead also told me to forget about persistence for now and just focus on the functionality.
With all the basic functionality implemented, it was time to start working on persistence. This proved to be much harder, simply because the design wasn’t made with persistence in mind. So my first priority was to refactor a lot of the code so it could be stored in and recovered from a database more easily. Clearly this could have been avoided, saving a week of effort on an already too tight deadline.
A common feature for servers is the ability to shut them down without killing the remaining work; instead, the server finishes the work first before shutting down. While this sounds like a very simple feature that is easy to achieve, there are a couple of concerns which need to be addressed:
- New work should be rejected but existing work should be finished.
- What about existing work that triggers a new job?
- What if one of the jobs is hanging?
- Should there exist some intermediate state where the server does not accept new work but keeps running for inspection?
The software we build consists of four different types of servers, none of which offer this feature. This is a common complaint from our customers, who now have to kill active jobs and restart them later. We are currently busy tackling this problem and incorporating the feature into the software.
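The first and third concerns above can be sketched with Java’s standard `ExecutorService` (this is a minimal illustration, not how our actual servers are implemented): `shutdown()` rejects new work while letting already-accepted jobs finish, and a bounded `awaitTermination()` keeps a hanging job from blocking shutdown forever.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class GracefulShutdown {
    // Submits 'jobs' tasks, shuts down, and returns how many jobs finished.
    public static int runAndShutdown(int jobs) {
        ExecutorService server = Executors.newFixedThreadPool(2);
        AtomicInteger finished = new AtomicInteger();

        // Existing work: accepted before shutdown, so it will be finished.
        for (int i = 0; i < jobs; i++) {
            server.submit(() -> { finished.incrementAndGet(); });
        }

        // Stop accepting new work; queued jobs keep running.
        server.shutdown();

        boolean rejected = false;
        try {
            server.submit(() -> { finished.incrementAndGet(); });
        } catch (RejectedExecutionException e) {
            rejected = true; // new work is rejected after shutdown
        }

        try {
            // Bound the wait so a hanging job cannot block shutdown forever.
            if (!server.awaitTermination(5, TimeUnit.SECONDS)) {
                server.shutdownNow(); // last resort: interrupt hanging jobs
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            server.shutdownNow();
        }
        return rejected ? finished.get() : -1;
    }
}
```

Note that this sketch does not cover the other two concerns: work that spawns new jobs needs a policy of its own (is it ‘existing’ work or ‘new’ work?), and an inspect-only intermediate state would need an explicit mode beyond what `ExecutorService` offers.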
When it comes to writing high-performance software, there are different approaches we can take. These paths match how important performance is considered to be.
The first does not consider performance important at all: the program is written without any attention to how it performs, and action is only taken once users start complaining. Users will only complain when performance is a deal-breaker for them; at that point it is no longer tolerable and should be considered a bug.
Another way is to write software the same as before, but to check its performance from time to time. When the performance falls below some level, action must be taken. This prevents users from complaining, as performance is boosted before it becomes critical.
A final way is to take performance into account while writing the software, as part of the design. This means constantly monitoring performance, evaluating the choices that are made, and making sure it stays above the required level. The level of performance desired here is often higher than in the previous case.
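One concrete form such constant monitoring can take is a performance guard: a check that runs a representative workload and fails when it exceeds a time budget, so regressions surface immediately instead of after release. The workload and budget below are illustrative assumptions, not taken from any real project.

```java
public class PerfGuard {
    // Illustrative workload standing in for a real operation under test.
    static long sumUpTo(long n) {
        long sum = 0;
        for (long i = 1; i <= n; i++) {
            sum += i;
        }
        return sum;
    }

    // Returns true if the workload finishes within the given budget (in ms).
    static boolean withinBudget(long n, long budgetMillis) {
        long start = System.nanoTime();
        sumUpTo(n);
        long elapsedMillis = (System.nanoTime() - start) / 1_000_000;
        return elapsedMillis <= budgetMillis;
    }
}
```

Run as part of the build, a guard like this turns the chosen performance level into an enforced requirement rather than an aspiration.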
But what are the consequences of each of these approaches? Is there a single best way to tackle performance?
With the rise of cloud computing and its on-demand capacity, I sometimes hear people say that software automatically scales: all you have to do is place it in a cloud environment such as Amazon’s. The reason people tend to believe this comes from the marketing made by such companies. They tell you tales of how your system adapts to the load automatically, and cite cases of companies whose load increased drastically overnight and how well that worked out in the cloud.
While it is true that in a cloud environment it is pretty easy to start multiple ‘instances’ that run the software, this by no means has anything to do with true scalability. Just consider a common desktop application, or even something that is normally installed on a server on-premises. We now decide to abandon the costly infrastructure required to run this software and move it to a cloud environment as-is.
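To see why simply running more copies is not scaling, consider a hypothetical server that keeps per-session state in local memory (the class and scenario are invented for illustration). Start two ‘instances’ of it behind a round-robin load balancer and the state splits between them:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical server that keeps session state in its own memory.
public class CounterServer {
    private final Map<String, Integer> hits = new HashMap<>();

    // Counts requests per session and returns the new count.
    public int hit(String sessionId) {
        return hits.merge(sessionId, 1, Integer::sum);
    }
}
```

If Alice’s first request lands on instance A and her second on instance B, each instance reports a count of 1: the application’s state was never designed to be shared, so adding instances multiplies the copies of the problem instead of the capacity.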
Over the course of time, systems have become more complex to meet users’ requirements. At the same time, humans have not evolved that much. This leads to two challenges:
- Making a complex system easy to use for the user.
- Writing and maintaining such a complex system.
A nice example of this has been a hot topic for some time now: ‘Big Data’. As we keep storing more and more information, it becomes harder and harder for humans to extract relevant or interesting information from it. Showing all of this information is something we don’t want, as it is far more than human capacity can handle. Instead we want to show only the relevant and interesting information, so that a user can act upon it.
This sounds easy, were it not that different users want different things, and they consider different information to be relevant and interesting. Trying to satisfy everyone leads to an abundance of information being shown, which is exactly what we wanted to prevent. The only currently existing solution is to build a highly configurable system that allows each user to specify what he wants to see.
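One simple shape such configurability can take is to let each user supply a rule (a predicate) that decides which records are relevant to them; the record format and rules below are invented for illustration.

```java
import java.util.List;
import java.util.function.Predicate;
import java.util.stream.Collectors;

public class RelevanceFilter {
    // Keeps only the records that match the user-configured rule.
    public static List<String> relevant(List<String> records, Predicate<String> rule) {
        return records.stream()
                .filter(rule)
                .collect(Collectors.toList());
    }
}
```

An operator might configure `record -> record.startsWith("ERROR")` while an analyst configures a different rule entirely; the system stays the same, only the user-supplied rule changes. Of course, every knob like this is exactly the extra complexity the next paragraph worries about.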
This leads to an even more complex system, making it a lot harder for developers and maintainers. But it does not have to be this way: we have come a long way in building such systems, and have learned a lot already.
As a software developer I like to come up with algorithms and structures that enable me to write software that pushes things to their limits. During my studies at the University of Antwerp I learned a lot about the complexity of algorithms, and I am thrilled to come up with more advanced techniques every day.
With the increased capacity of hardware, the skill of writing software that is as optimal as possible, in both time and memory, is less appreciated. In that respect I may have been born too late: nowadays a lot of software is written without any consideration of its size and resource consumption, since the hardware can handle it anyway.
But when I think about it, better hardware should not allow us to write less optimal software; it should allow us to write software that does more.