Call me a perfectionist, but I often struggle to come up with a design I am happy with. In every design I produce, I find a couple of things I don’t like and would like to see improved. As a result I spend a lot of time trying out better designs, even when the impact isn’t that big. Going back and forth on a design can take up so much time that I eventually have to tell myself to settle for my current best. In this blog post I will go deeper into my thinking and feelings when faced with this issue.
One of the biggest missing features of the current Object Saving Framework is that there is no native support for lists. There is a way to work around this, but it’s a nasty one. In this blog post I first go into depth about the workaround, which will make it clear that native support is essential, and finally I will discuss some issues with supporting it.
Recently I have been reworking some code so that, instead of accessing the database directly, it uses an API exposed by a ‘server’ application. The main idea behind this was to have a single source of truth: getting rid of any syncing problems in case the database changes, as well as preventing duplicated code.
However, the amount of data you want to transfer from server to client is a lot smaller than what an application with direct database access would pull in. Instead you want some form of pagination, where you request a certain number of rows and fetch subsequent results if there are any. As soon as you think about something like that, the Cursor concept comes to mind. Needless to say, with the power of hindsight, using it was not a good idea.
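The pagination idea described above can be sketched roughly as follows. Everything here is illustrative: `Row`, `fetchPage`, and the in-memory “table” are hypothetical stand-ins for the real server API, not the actual code.

```cpp
#include <cstddef>
#include <string>
#include <vector>

// Hypothetical row type; a stand-in for whatever the server returns.
struct Row { int id; std::string payload; };

// Pretend server call: returns up to `limit` rows starting at `offset`.
std::vector<Row> fetchPage(std::size_t offset, std::size_t limit) {
    static const std::vector<Row> table = {
        {1, "a"}, {2, "b"}, {3, "c"}, {4, "d"}, {5, "e"}};
    std::vector<Row> page;
    for (std::size_t i = offset; i < table.size() && page.size() < limit; ++i)
        page.push_back(table[i]);
    return page;
}

// Client-side pagination: keep requesting pages until one comes back
// smaller than the requested limit, meaning there is no more data.
std::vector<Row> fetchAll(std::size_t pageSize) {
    std::vector<Row> all;
    std::size_t offset = 0;
    while (true) {
        std::vector<Row> page = fetchPage(offset, pageSize);
        all.insert(all.end(), page.begin(), page.end());
        if (page.size() < pageSize) break;  // last page reached
        offset += pageSize;
    }
    return all;
}
```

The loop keeps the client simple, but it is also where a server-side cursor starts to look tempting, since the server then tracks the position for you.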
When reading data from a file multiple times, there are basically two ways to do so: you can either reopen the file every time, or keep the file open and jump back to a specific location. Given the option to jump to a specific location inside the file, I was wondering exactly how much faster it is to use that over re-opening the file every time.
Initially my question was rather: why would anyone re-open a file when C++ offers a way to reset the file pointer to the beginning? In my mind there is no way that re-opening could be any faster, and since both have the same effect, why ever bother? But of course I had to put my idea to the test.
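The two approaches can be sketched like this (here as line-counting helpers rather than the actual benchmark; the file name is whatever you pass in). One non-obvious detail: after reading to the end of a stream, you must call `clear()` before `seekg()`, because the eof/fail bits would otherwise make the seek a no-op.

```cpp
#include <fstream>
#include <string>

// Approach 1: re-open the file on every read pass.
long countByReopen(const std::string& path) {
    std::ifstream file(path);
    std::string line;
    long n = 0;
    while (std::getline(file, line)) ++n;
    return n;
}

// Approach 2: reuse an already-open stream, rewinding it first.
// clear() is required: reading to EOF sets the eof/fail bits,
// and seekg() would silently fail while those are set.
long countBySeek(std::ifstream& file) {
    file.clear();
    file.seekg(0, std::ios::beg);
    std::string line;
    long n = 0;
    while (std::getline(file, line)) ++n;
    return n;
}
```

Wrapping each helper in a timing loop (e.g. with `std::chrono::steady_clock`) gives the comparison; both should report the same line count on every pass.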
I always believed that using a primitive, or native, array would be more performant than using a vector. However, the C++ standard book says that you should always use a vector. This sounds odd to me: why would you use a dynamically sized container when you want a fixed-size list? The big problem with C++ (or actually C) arrays is that they have no built-in length the way Java arrays do. However, there is also the standard array class, which seems to fill this gap. So it is time to see how these three different options perform.
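For reference, here is what the three contenders look like side by side on a trivial fixed-size task. This is only a sketch of the options being compared, not the benchmark itself:

```cpp
#include <array>
#include <cstddef>
#include <numeric>
#include <vector>

int sumNative() {
    int data[5] = {1, 2, 3, 4, 5};  // C-style array: no size member,
    int s = 0;                      // length must be tracked by hand
    for (std::size_t i = 0; i < 5; ++i) s += data[i];
    return s;
}

int sumStdArray() {
    std::array<int, 5> data = {1, 2, 3, 4, 5};  // fixed size, stack-allocated,
    return std::accumulate(data.begin(), data.end(), 0);  // knows its own length
}

int sumVector() {
    std::vector<int> data = {1, 2, 3, 4, 5};  // dynamic size, heap-allocated
    return std::accumulate(data.begin(), data.end(), 0);
}
```

`std::array` is essentially the native array with a size attached and no runtime overhead, which is why it is the interesting middle ground in this comparison.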
Many applications allow for some level of configuration, ranging from where the log files should be written, to the connection settings of a database, to application-specific parameters. But with configuration comes an extra level of potential error. What if the configuration (probably a file) is not specified, what if there is an error in the configuration, and should you force the user to set all of these configuration values?
All of these are important decisions that need to be made, and they cannot be neglected. I will discuss a couple of different views and the options you have, along with their benefits and drawbacks.
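Two of the policies in question can be made concrete with a small sketch: fall back to a default when a setting is missing, or treat a missing setting as a hard error at startup. The `Config` class and its method names here are hypothetical, purely for illustration.

```cpp
#include <map>
#include <stdexcept>
#include <string>

class Config {
public:
    void set(const std::string& key, const std::string& value) {
        values_[key] = value;
    }

    // Optional setting: a missing key silently falls back to a default.
    std::string getOr(const std::string& key, const std::string& fallback) const {
        auto it = values_.find(key);
        return it != values_.end() ? it->second : fallback;
    }

    // Mandatory setting: a missing key aborts startup with a clear message.
    std::string require(const std::string& key) const {
        auto it = values_.find(key);
        if (it == values_.end())
            throw std::runtime_error("missing required setting: " + key);
        return it->second;
    }

private:
    std::map<std::string, std::string> values_;
};
```

The trade-off is visible in the two accessors: `getOr` keeps the application running with sensible defaults, while `require` fails loudly so a broken configuration is caught immediately rather than surfacing later.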
I guess not many people have ever heard of the Data Distribution Service (DDS) before. For me this was a completely new technology, but I am open to new things and I was curious to find out what it was all about.
Since at my new job they have decided to use this technology for the core of their products, it is something I will have to learn all about. DDS is an open standard for communication between different participants and is aimed at IoT. If you are interested, keep on reading, and find out what it is all about together with me.