In the previous blog post I explained my preferred way of writing commit messages; closely related to this is how you actually get those commits onto the main branch. In the end you want your changes to be available on the main branch, but there are many possible ways to get there. In this blog post I will explain some of the possibilities and which one I prefer.
The easiest way is obviously to just use a single master branch and commit everything directly on it. The downsides of this approach are that you cannot do any peer review before the code is merged, and that collaborating becomes harder in general. Merging also becomes more cumbersome, because the master branch changes a lot and often, meaning you constantly have to merge those changes into your local branch. Having a separate branch for your changes solves all of these problems.
Whenever you use a version control system such as Git, there are many possible ways of organising everything. The goal of such a system is to keep track of changes, but how easy it is to find a specific change depends mostly on how you use it. That is why in this blog post I will discuss the way I like to write commit messages, to guarantee a clear, readable and easy-to-track history.
The usage of Artificial Intelligence is growing in every possible area. Though I have no real experience with AI, I am very interested in learning more about it and using it. AI is, however, a very general term that covers many different techniques, ranging from heuristics such as Particle Swarm Optimisation to Machine Learning and Neural Networks. There are big differences between these algorithms, and what I want to discuss here is the ability to understand their outcomes.
Crucial to good object orientation is the separation of concerns and encapsulation. This means your objects need a clear API that limits the way their internal state can be changed. The object itself must be responsible for its state and must guard its integrity. While the API defines which operations are allowed, these methods have restrictions of their own: it is essential that the parameters passed to them stay within those limits. The only solid way of checking these values is by using preconditions.
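As a minimal sketch of what such preconditions look like, consider a hypothetical `BankAccount` class (the class and its methods are illustrative, not from the original post). Every public method validates its parameters before touching the internal state, so the object can guarantee its own integrity:

```java
import java.util.Objects;

public class BankAccount {
    private long balanceInCents;

    public BankAccount(long initialBalanceInCents) {
        // Precondition: a balance can never start negative.
        if (initialBalanceInCents < 0) {
            throw new IllegalArgumentException(
                "initial balance must be >= 0, was " + initialBalanceInCents);
        }
        this.balanceInCents = initialBalanceInCents;
    }

    public void deposit(long amountInCents) {
        // Precondition: only positive amounts make sense here.
        if (amountInCents <= 0) {
            throw new IllegalArgumentException(
                "deposit must be positive, was " + amountInCents);
        }
        balanceInCents += amountInCents;
    }

    public void transferTo(BankAccount target, long amountInCents) {
        // Preconditions: a real target, a positive amount, sufficient funds.
        Objects.requireNonNull(target, "target account must not be null");
        if (amountInCents <= 0 || amountInCents > balanceInCents) {
            throw new IllegalArgumentException(
                "invalid transfer amount: " + amountInCents);
        }
        balanceInCents -= amountInCents;
        target.balanceInCents += amountInCents;
    }

    public long balanceInCents() {
        return balanceInCents;
    }

    public static void main(String[] args) {
        BankAccount account = new BankAccount(1_000);
        account.deposit(500);
        System.out.println(account.balanceInCents()); // prints 1500
    }
}
```

Because every mutation is guarded at the door, no caller can ever push the object into an invalid state; failures surface immediately at the offending call site instead of later, far away from the cause.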
University is the place where you are taught essential skills required for your future career. Although it is less than three years since I graduated, I do feel I have evolved and learned a lot since then. Even though at university I already felt I was a good developer with a strong mind, working on real-life software teaches you a lot. I would like to look back on some things I have learned, how my view of things has changed and what caused this.
A Map is a very common data structure, and you often find yourself wanting to traverse all the elements in it. For this we have multiple methods: iterating over the keys, over the values, or over the entries.
If you are interested in both the keys and the values, iterating over the values does not fit your needs, as it is impossible to get the key back from a value. But which one of the remaining two ways performs best? I did the test and found some interesting results.
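Assuming a Java `Map` (the map contents below are just illustrative), the two candidate traversals look like this: iterating the `keySet()` and looking each value up again, versus iterating the `entrySet()`, where key and value come together:

```java
import java.util.HashMap;
import java.util.Map;

public class MapTraversal {
    public static void main(String[] args) {
        Map<String, Integer> ages = new HashMap<>();
        ages.put("alice", 30);
        ages.put("bob", 25);

        // 1. Iterate the keySet and look each value up again.
        for (String name : ages.keySet()) {
            Integer age = ages.get(name); // one extra lookup per key
            System.out.println(name + " -> " + age);
        }

        // 2. Iterate the entrySet; each entry carries key and value.
        for (Map.Entry<String, Integer> entry : ages.entrySet()) {
            System.out.println(entry.getKey() + " -> " + entry.getValue());
        }
    }
}
```

The structural difference is visible in the code: the `keySet()` variant performs an additional `get()` per element, while the `entrySet()` variant reads both halves of each mapping in a single pass.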
We all use protocols continuously, and most of the time this is fine and we don't really care about them. The protocol, however, determines a lot more than just which data you have to send, and in this blog post I will go into detail on the requirements for lossless communication at the application level. There is a big difference between lossless communication at a lower layer, such as TCP, and at the application layer.
Lossless communication cannot be achieved with just any protocol; the protocol must be designed for it, in such a way that it can handle all types of failures:
- Network failure
- Client crash
- Server crash
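To make this concrete, here is a toy sketch (no real network; all names are illustrative, not from the original post) of two pieces of machinery such a protocol typically needs: the client retries until it sees an acknowledgement, so a lost message or lost ack does not drop data, and the server deduplicates by message id, so a retried message is not applied twice:

```java
import java.util.HashSet;
import java.util.Set;

public class LosslessSketch {
    // The server remembers which message ids it has already processed,
    // so a redelivered message (e.g. after a lost ack) is applied once.
    static class Server {
        private final Set<Long> seen = new HashSet<>();
        int applied = 0;

        // The return value plays the role of the acknowledgement;
        // in a real protocol the ack itself can get lost in transit.
        boolean receive(long messageId) {
            if (seen.add(messageId)) {
                applied++;
            }
            return true;
        }
    }

    // The client keeps resending until an ack arrives (at-least-once
    // delivery). `lostAcks` simulates that many acks being dropped.
    static void sendReliably(Server server, long messageId, int lostAcks) {
        boolean acked = false;
        int attempt = 0;
        while (!acked) {
            boolean ack = server.receive(messageId);
            acked = ack && attempt++ >= lostAcks;
        }
    }

    public static void main(String[] args) {
        Server server = new Server();
        sendReliably(server, 1L, 2); // ack lost twice, message resent
        sendReliably(server, 2L, 0); // delivered first try
        // Despite the retries, each message was applied exactly once.
        System.out.println(server.applied); // prints 2
    }
}
```

Retries cover the network-failure case; surviving client and server crashes additionally requires persisting the unacknowledged messages and the set of seen ids to durable storage, which this in-memory sketch deliberately leaves out.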