An interface is often only implicit, as every class defines one. An interface defines the contract of a class by specifying the methods that can be invoked on it. Java also has an explicit interface type, which until recently adhered to this idea, but since Java 8 an interface can carry a ‘default’ implementation. This breaks completely with what an interface is supposed to be and blurs the line between an interface and an abstract class. So what is still the point of defining an interface, and of using default implementations?
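To make the discussion concrete, here is a minimal sketch of a Java 8 default method; the `Greeter` interface and its names are hypothetical, invented purely for illustration:

```java
// Hypothetical example: an interface with a default method (Java 8+).
interface Greeter {
    String name();                       // abstract method: the contract

    default String greet() {             // default method: behaviour in the interface
        return "Hello, " + name() + "!";
    }
}

class GreeterDemo {
    public static void main(String[] args) {
        // The implementation only supplies name(); greet() is inherited
        // from the interface, much like from an abstract class.
        Greeter g = () -> "world";       // one abstract method, so a lambda works
        System.out.println(g.greet());   // prints: Hello, world!
    }
}
```

Note how the interface now ships behaviour, not just a contract, which is exactly the blurring with abstract classes discussed above.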
We are looking into replacing one of our operations tools, and we have been questioning whether it should be a web application (as it is now) or a native application. For this I did some investigation and made a clear comparison of the advantages and disadvantages of each solution. The application we have could actually be split into two parts: one for operations and a more advanced one for diagnostics. This was key to my investigation, as the two serve different audiences.
My analysis is of course focused on this purpose and on the situation in which the application has to run. It may not apply to your situation, but I hope my explanation and arguments are clear enough that you can filter out which ones are relevant to you.
When reworking an existing application, I wondered whether I should continue using Angular Material Design or switch to Bootstrap. While I have used Bootstrap in the past when working with plain HTML web pages, I had never used Material Design. The only experience I had was how the application looked, and that was horrible. My first reaction was to start over and use Bootstrap instead of Material Design, but the decision was not as easy as I hoped it would be. In this blog post I will compare Material Design and Bootstrap.
Crucial to good object orientation are separation of concerns and encapsulation. This means your objects need a clear API that limits the ways their internal state can be changed. The object itself must be responsible for its state and must guard its integrity. While the API defines which operations are allowed, these methods have restrictions of their own. It is essential that the parameters provided to these methods stay within those limitations, and the only solid way of checking such values is with preconditions.
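As a sketch of what such precondition checks look like in Java, consider this hypothetical `BankAccount` (the class and its limits are invented for illustration):

```java
import java.util.Objects;

// Hypothetical example: preconditions guarding the object's internal state.
class BankAccount {
    private final String owner;
    private long balanceCents;

    BankAccount(String owner) {
        // Precondition: reject null before any state is created.
        this.owner = Objects.requireNonNull(owner, "owner must not be null");
    }

    void deposit(long amountCents) {
        // Precondition: the parameter must be within the method's limitations.
        if (amountCents <= 0) {
            throw new IllegalArgumentException(
                "deposit must be positive, got: " + amountCents);
        }
        balanceCents += amountCents;
    }

    long balanceCents() {
        return balanceCents;
    }
}
```

Because every mutator validates its input up front, the object can never reach an invalid state, and a violation fails fast at the call site instead of corrupting data silently.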
Ever since I started working for my new company I have been working on this type of warehouse management system, although we call it ‘material flow control’. Many things have changed since the initial design (which I wasn’t part of), the biggest being the move from a generic to a specific approach. When that decision was made, my team lead also told me to forget about persistence for now and just focus on the functionality.
With all the bare functionality implemented, it was time to start working on persistence. This proved much harder than expected, simply because the design wasn’t made with persistence in mind. So my first priority was to refactor a lot of the code so it could be more easily stored in and recovered from a database. Clearly this could have been avoided, saving a week of effort on an already too tight deadline.
A common feature for servers is the ability to shut down without killing the remaining work. Instead, the server finishes the work first and then shuts down. While this sounds like a simple feature that is easy to achieve, there are a couple of concerns that need to be addressed:
- New work should be rejected but existing work should be finished.
- What about existing work that triggers a new job?
- What if one of the jobs is hanging?
- Should there exist some intermediate state where the server does not accept new work but keeps running for inspection?
The software we build consists of four different types of servers, none of which offer this feature. This is a common complaint from our customers, who currently have to kill active jobs and restart them later. We are now tackling this problem and incorporating the feature into the software.
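The first two concerns above (reject new work, finish existing work, and cut off hanging jobs) can be sketched with Java's `ExecutorService`. This is only an assumption about how such a server could be built, not our actual implementation, and the 30-second timeout is an arbitrary choice:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Sketch: graceful shutdown that rejects new work, drains existing work,
// and falls back to interruption when jobs hang past a deadline.
class GracefulServer {
    private final ExecutorService workers = Executors.newFixedThreadPool(4);

    void submitJob(Runnable job) {
        // After shutdown() this throws RejectedExecutionException:
        // new work is rejected while existing work keeps running.
        workers.submit(job);
    }

    void shutdownGracefully() {
        workers.shutdown();                  // stop accepting, keep draining
        try {
            if (!workers.awaitTermination(30, TimeUnit.SECONDS)) {
                workers.shutdownNow();       // hanging jobs: interrupt and give up
            }
        } catch (InterruptedException e) {
            workers.shutdownNow();
            Thread.currentThread().interrupt();
        }
    }
}
```

The harder concerns, such as jobs that spawn follow-up jobs and an inspection-only intermediate state, need application-level bookkeeping that a thread pool alone does not provide.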
When it comes to writing high-performance software, there are different approaches we can take. These paths match how important performance is considered to be.
The first does not consider performance important at all: the program is written without any attention to how it performs. Only after users start complaining about the performance is action taken to fix it. Users will only complain when performance becomes intolerable for them, and at that point it should be considered a bug.
Another way is to write software as before, but to check its performance from time to time. When the performance falls below some threshold, action must be taken. This prevents users from complaining, as the performance is improved before it becomes critical.
A final way is to take performance into account while writing the software, as part of the design. This means constantly monitoring performance, evaluating the choices that are made, and making sure performance stays above the required level. That required level is often higher than in the previous case.
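The "check from time to time" approach can be as crude as a small timing harness run against a threshold. The operation and the 1 ms budget below are invented for illustration; for serious measurements a dedicated tool such as JMH is far more reliable than hand-rolled timing:

```java
// Sketch: a crude periodic performance check against a fixed budget.
class PerfCheck {
    static final long MAX_NANOS_PER_CALL = 1_000_000; // 1 ms budget (an assumption)

    // Average time per call over a number of iterations.
    static long measure(Runnable op, int iterations) {
        long start = System.nanoTime();
        for (int i = 0; i < iterations; i++) {
            op.run();
        }
        return (System.nanoTime() - start) / iterations;
    }

    public static void main(String[] args) {
        long avg = measure(() -> Math.sqrt(42.0), 10_000); // placeholder workload
        if (avg > MAX_NANOS_PER_CALL) {
            System.out.println("performance below required level: " + avg + " ns/call");
        }
    }
}
```

In the third approach the same kind of check would run continuously as part of monitoring, rather than being invoked occasionally by a developer.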
But what are the consequences of each of these approaches? Is there a single best way to tackle performance?