Some time ago I had, for the first time (apart from once at university), to implement part of a protocol specification. In this case it was the Modbus/TCP protocol that was partly required, and the implementation involved some pretty low-level programming. In our software we simply use a library for the Modbus protocol, but to keep our tests lightweight, and to verify the correctness of that library as well, I did not want to use it in my unit tests. I remember really hating, back at university, all the messing around with particular bits at specific locations, as I just wanted something more high-level that would do the work for me. Apparently, that has changed.
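To give an impression of what that low-level work looks like, here is a minimal sketch of building a Modbus/TCP request frame by hand. It assembles a "Read Holding Registers" request (function code 0x03) with its MBAP header; the function name and parameter values are my own illustration, not taken from any particular library.

```python
import struct

def read_holding_registers_request(transaction_id, unit_id, start_address, quantity):
    """Build a Modbus/TCP 'Read Holding Registers' (function 0x03) request frame."""
    # PDU: function code (1 byte), start address (2 bytes), register count (2 bytes),
    # all big-endian as the Modbus specification requires.
    pdu = struct.pack(">BHH", 0x03, start_address, quantity)
    # MBAP header: transaction id, protocol id (always 0 for Modbus),
    # length of the remaining bytes (unit id + PDU), unit id.
    mbap = struct.pack(">HHHB", transaction_id, 0x0000, len(pdu) + 1, unit_id)
    return mbap + pdu

frame = read_holding_registers_request(transaction_id=1, unit_id=0x11,
                                       start_address=0x006B, quantity=3)
```

Hand-rolling frames like this in tests also means the test doubles as a check on the library: if the library produces a different byte sequence, one of the two is wrong.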
With the evolution of the internet, almost everything is accessible from your computer, wherever you are in the world. When dealing with applications installed on a customer's server, however, this is often different; and even when the server is accessible, integration work and tests still need to be done. This type of work is a lot easier when you have direct communication with the customer, and besides, a customer appreciates seeing someone working on it.
How much time you can expect to spend on site depends on a couple of factors, which I will discuss in this post. I will also share some of my opinions on and experiences with going on site.
When working with different branches it is easy to do pull requests before merging changes into the master branch. The most common use of this is to review code prior to the merge. There is, however, a lot more you can do with pull requests to assist your daily development process and achieve higher-quality software.
In general you should do as much as you can before merging, because up until that point it is straightforward to see what caused any problem: you are only dealing with those changes. Another principle is to automate as much as possible.
While the list of things you could do on a pull request is endless, I will explain the most common ones and their benefits.
Immutable objects are a great concept: an object that cannot be altered after it has been created. Such an object is easy to debug and understand, and it gives a lot of certainty simply because it will never change. Clearly we should aim to make objects immutable as much as possible, but this sounds a lot easier than it actually is. Moreover, the usability of immutable objects can pose quite a few problems as well.
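As a minimal sketch of the idea, Python's frozen dataclasses give you immutability out of the box: attribute assignment raises an error, and "changing" a value means constructing a new object. The `Point` class here is a hypothetical example of my own, not from any post in this series.

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Point:
    x: int
    y: int

p = Point(1, 2)
# p.x = 5 would raise dataclasses.FrozenInstanceError: the object cannot be altered.
# To "change" it, create a copy with the new value instead:
moved = replace(p, x=5)  # a new Point(5, 2); p itself is untouched
```

This also hints at the usability cost mentioned above: every update allocates a new object, and code that was written to mutate in place has to be restructured around copies.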
In the previous blog post I explained my preferred style of commit messages; closely related to that is how to actually commit these changes. In the end you want the changes to be available on the main branch, but there are many possible ways to get there. In this blog post I will explain some of the possibilities and which one I prefer.
The easiest way is obviously to just use a single master branch and commit everything directly onto it. The downsides of this approach are that you cannot do any peer review before the code is merged, and that collaborating becomes harder in general. Merging changes also becomes more cumbersome, as the master branch changes a lot and often, meaning you constantly have to keep merging its changes into your local branch. Having a separate branch for your changes is the solution to all of these problems.
Whenever you use a versioning system such as Git, there are many possible ways of organising everything. The goal of such a system is to keep track of changes, but how easy it is to find a specific change mostly depends on how you use it. That is why in this blog post I will discuss the way I like to write commit messages, to guarantee a clear, readable and easy-to-track history.
The usage of Artificial Intelligence is growing in every possible area. Though I have no real experience with AI, I am very interested in learning more about it and using it. AI is, however, a very general term that covers many different techniques, ranging from heuristics such as Particle Swarm Optimisation to Machine Learning and Neural Networks. There are big differences between these algorithms, and what I want to discuss here is the ability to understand the outcome of an algorithm.