Clean and maintainable Git Histories – Part 2

3 simple tricks for managing git histories

It took me a while to learn these simple tricks for effectively managing Git histories with just a few commands. In the first article, I explained why clear, structured commit messages and cohesive commits are important and why temporary commits make the history more difficult to read. This time, I will show you how to organise your commits before merging them, giving you a clean, easy-to-understand commit history.

Changes should be reviewed and consolidated into a clear commit history. The aim is to include only relevant changes, just like book authors revise drafts before publishing. There are three ways to clean up a commit history for clarity and maintainability:

  • Add changes directly to the previous commit
    If you make minor adjustments to the last commit, you can add them directly into the previous commit by utilizing git commit --amend
  • Consolidate multiple commits into one single commit
    All changes can be grouped together into a single commit with
    git merge --squash
  • Tidy up your commits
    Remove temporary commits, combine multiple commits into one or change the content to create a simple, easy-to-read history with git rebase

Add changes directly to the previous commit

You can make minor corrections or remove unnecessary changes directly in the last commit using git commit --amend. This command lets you add new files, remove unwanted files or change the commit message – all without adding a new commit to the history (strictly speaking, Git replaces the last commit with a new one). Here’s how to make changes:

  • Open your files in the working directory, and then either add new files using
    git add or remove files that you don’t need any more using git rm
  • Now you can adjust a commit by adding the changes to the last one using the
    git commit --amend command.
  • You can also edit the commit message if you want.
  • If the commit has already been pushed, you’ll need to force push to the remote repository with git push --force-with-lease.
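The whole amend workflow can be sketched in a throwaway repository (file names and messages are illustrative; the force push is omitted here since there is no remote):

```shell
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.email "demo@example.com"
git config user.name "Demo"

echo "first draft" > notes.txt
git add notes.txt
git commit -q -m "docs: add notes"

# Forgot something? Fix the file and amend instead of creating a new commit.
echo "final draft" > notes.txt
git add notes.txt
git commit -q --amend --no-edit   # keep the message; drop --no-edit to change it

git log --oneline                 # still exactly one commit
```

After the amend, the history still contains a single commit, now with the corrected content.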

Need more tips? Here we go:

Undo unnecessary commits

You can undo unnecessary commits with git reset, as long as they haven’t been pushed yet.
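A minimal sketch in a throwaway repository (names are illustrative):

```shell
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.email "demo@example.com"
git config user.name "Demo"

echo base > app.txt
git add app.txt && git commit -q -m "feat: base version"
echo oops >> app.txt
git add app.txt && git commit -q -m "wip: temporary state"

# Undo the last commit but keep its changes staged (--soft);
# use --hard instead to discard the changes entirely.
git reset --soft HEAD~1

git log --oneline   # only "feat: base version" remains
```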

Undo changes that have already been pushed

If you’ve already pushed some changes, you can undo them using git revert <commit-hash>. But keep in mind that this will create a new commit.
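A sketch of the revert behaviour in a throwaway repository (names are illustrative):

```shell
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.email "demo@example.com"
git config user.name "Demo"

echo one > file.txt
git add file.txt && git commit -q -m "feat: step one"
echo two >> file.txt
git add file.txt && git commit -q -m "feat: step two"

# Revert the last commit; history is not rewritten,
# instead a new commit undoing the change is added on top.
git revert --no-edit HEAD

git log --oneline   # three commits: the revert sits on top
```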

Consolidate multiple commits into one single commit

During development, you usually work in your feature branch and create several commits such as partial steps, WIP commits or bug fixes. Often these intermediate states are not relevant for the main branch and inflate the history. In such cases, the changes can be reduced to one single, clean commit.

Before merging into the main branch, you can squash your commits so that only one summary commit appears in the main branch. The detailed intermediate steps in the feature branch are not retained – instead, a clear, complete commit is created that describes the entire change. Here is an example:

git checkout main
git merge --squash <feature-branch>
git commit -m "feat(auth): add JWT authentication"
git push origin main

This method gives you a tidy, clear commit history by presenting the whole change as one completed commit – perfect for feature development with lots of intermediate stages. This makes code reviews a lot easier, and the main branch stays simple and free of unnecessary steps.

There are disadvantages: you lose granularity and contextual information, and conflict resolution can become more complex because squashing bundles multiple changes into one large commit. In practice, these drawbacks are often acceptable trade-offs.

Tidy up your commits

You can use git rebase -i <commit-hash> to edit the commit history interactively and to tidy up the commits. This is really useful for getting rid of temporary or debug commits, or for merging multiple commits. Let’s say we want to edit the last five commits with git rebase -i HEAD~5. This will show you a list of the last five commits:

pick abcdef1 Commit A
pick abcdef2 Commit B
pick abcdef3 Commit C
pick abcdef4 Commit D
pick abcdef5 Commit E

Now you can edit the commits as you like:

  • Change pick to squash for the commits you want to merge.
  • Use fixup if you want to fold changes or corrections from later commits into an earlier one without keeping the later commit’s message.
  • You can use the drop command to get rid of individual commits from the history. This is great for temporary or debug commits.
  • And use edit to make changes to a commit. Git stops at this point in the rebase process, so you can make changes in the working directory, like removing debug output or undoing temporary changes.
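Applied to the list above, a cleaned-up todo file might look like this: Commit B is squashed into A, Commit D is fixed up into C, and the temporary Commit E is dropped.

```
pick   abcdef1 Commit A
squash abcdef2 Commit B
pick   abcdef3 Commit C
fixup  abcdef4 Commit D
drop   abcdef5 Commit E
```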

Then just save the todo list and you’re all set! Since you’ve rewritten the commit history, you’ll need to perform a forced push to update the remote repository using git push --force-with-lease.

A rebase cleans up the Git history and makes it linear, as if all the changes from your feature branch were applied directly on top of the latest version of the main branch – without creating a merge commit that could cause issues.

Just be careful:

  • If you rewrite history, you might lose sight of the original context, especially when and how upstream changes were integrated.
  • If you rebase branches that have already been published, you might run into problems because you’ll be changing the history. Other team members working on the same branch will run into conflicts and have to readjust their local changes.

Conclusion

In this two-part series of articles, we’ve taken a close look at keeping Git histories easy to understand and to maintain. This is something that often gets overlooked, but it’s actually really important for making sure the team works efficiently.

The second part is all about specific ways to tidy up the commit history before merging into the main branch. We go over the changes one more time to consolidate them into a clear commit history. The idea is to include only changes that are relevant for production.

Links

Rewriting history https://git-scm.com/book/en/v2/Git-Tools-Rewriting-History

Photo by Tobias Reich on Unsplash

Clean and maintainable Git Histories – Part 1

3 methods for keeping histories clean and organized

In this series of articles, I’m going to show you how to get clean and maintainable Git histories. In the first part, we look at three methods. If they are followed, the history remains clear, facilitates code reviews and debugging, and can serve as documentation:

  1. Meaningful commit messages
    A clear description of the change and its motivation
  2. Cohesive commits
    Only changes that belong together should be included in a single commit
  3. Avoid temporary commits
    WIP, temporary or debug commits should not be merged to keep the history clean

Meaningful commit messages

Everything starts with a clear commit message. It should briefly summarize what was changed and why. Conventional Commits offer us a reliable format for commit messages, enhancing the clarity and comprehensibility of source code histories. A commit message should include:

  • Type: The type of change (e.g., fix, feat).
  • Scope (optional): The affected area of the project.
  • Description: A brief description of the change.
  • Body (optional): Additional details about the change.
  • Footer (optional): Metadata, e.g., references to issues.

Here is an example of such a commit message:

feat(auth): add JWT authentication

This commit introduces JSON Web Token (JWT) based authentication to the login module.

BREAKING CHANGE: The login API now requires a token for authentication.

Cohesive commits

A commit should be atomic, containing logically related changes that should not be split up. Each commit should represent a standalone change, whether it’s a bug fix, a feature enhancement, or a code refactoring. This practice simplifies change tracking and helps find errors more quickly. It provides a clear overview of the modifications made and the reasons behind them, also streamlining the code review process. By isolating each change, reviewers can concentrate on specific modifications without feeling overwhelmed by a large number of changes. Moreover, in case of issues, reverting changes becomes significantly more straightforward, minimizing disruptions to the overall project.

Let’s take the previous example and explain the commits in more detail:

Commit 1: feat(auth): setup JWT library and utility class
Commit 2: feat(auth): configure security with JWT filter
Commit 3: feat(auth): modify login API to return JWT token
Commit 4: test(auth): add unit tests for JWT authentication
Commit 5: docs(auth): update README with authentication instructions
  • Commit 1 adds the JWT library and creates the utility class
  • Commit 2 includes a JWT Filter into the security configuration
  • Commit 3 focuses on the specific change to the API
  • Commit 4 is dedicated to tests to ensure that the new functionality is fully covered
  • Commit 5 contains only documentation changes

Aim to create truly cohesive commits, not just small, chronological ones. The purpose of commits is to explain what changed and why, not to document every step. This leads to a cleaner Git history and helps improve code quality and maintainability.
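A cohesive split can be sketched by staging files separately so that unrelated changes end up in separate commits (file names are illustrative; git add -p does the same at hunk level within a single file):

```shell
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.email "demo@example.com"
git config user.name "Demo"

# Two unrelated changes sit in the working directory ...
echo "auth logic"  > AuthService.java
echo "readme text" > README.md

# ... so stage and commit them separately instead of lumping them together.
git add AuthService.java
git commit -q -m "feat(auth): add authentication logic"
git add README.md
git commit -q -m "docs: update README"

git log --oneline   # two focused commits
```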

Avoid temporary commits

While in the process of development and debugging, it’s important to be mindful of the impact of WIP (Work in Progress) and debug commits on the clarity of a code repository. These types of commits can introduce steps that do not reflect the final implementation, leading to complications in tracking progress on features or bug fixes.

Take, for example, a common debug commit where a developer adds println statements to monitor variable behavior during code execution. It is essential to clean up these debugging statements once the debugging phase is complete.

Below are some commit samples that are not suitable for merging:

Commit 1: feat(user): first version UserService
Commit 2: refactor(user): UserService - Tests not working
Commit 3: debug(auth): added printlns to check flow
  • Commit 1: Incomplete, unclear state – do not merge
  • Commit 2: Must be cleaned up or squashed before merging
  • Commit 3: Helpful for debugging – remove before merging

To maintain a clear and detailed project history, it is advisable to rewrite the commit history before merging: consolidate work-in-progress commits and eliminate temporary and debug commits, ensuring that only important, production-ready changes are integrated into the main branch.
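One way to consolidate such work-in-progress commits before merging is git commit --fixup together with git rebase --autosquash. A sketch in a throwaway repository (names are illustrative; GIT_SEQUENCE_EDITOR=true accepts the generated todo list unchanged):

```shell
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.email "demo@example.com"
git config user.name "Demo"

echo base > main.txt
git add main.txt && git commit -q -m "chore: initial commit"

echo v1 > UserService.java
git add UserService.java && git commit -q -m "feat(user): add UserService"

# A later correction, marked as a fixup of the feature commit
echo v2 > UserService.java
git add UserService.java && git commit -q --fixup HEAD

# Autosquash folds the fixup commit into the feature commit
GIT_SEQUENCE_EDITOR=true git rebase -i --autosquash HEAD~2

git log --oneline   # the fixup commit is gone
```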

Conclusion

These 3 methods help to structure and maintain project histories: clearly defined commit units, meaningful commit messages, and merging only production-relevant changes into your primary branch. Just like book authors revise drafts before publishing, consider cleaning up your commit history before sharing it.

So stay tuned for the second part, where we delve into techniques for cleaning up and consolidating commits.

Links

Conventional Commit https://www.conventionalcommits.org/en/v1.0.0/#summary

Photo by Markus Spiske on Unsplash

Reinstalling Your Must-Have Dev Tools and Extensions on Mac

A step-by-step guide to bringing your essential development setup back to life.

So, you’ve got a shiny new Mac, and you’re ready for a fresh start. But where do you begin? Installing the right tools and applications is crucial to get your development environment up and running smoothly. In this guide, I’ll walk you through using Mac terminal commands to streamline the installation process for essential programs like Homebrew, Visual Studio Code or SDKMAN. Let’s get your Mac setup right 💻!

Which applications have I installed?

If you want to see a list of all the applications installed on your old Mac before you start fresh on a new one, the terminal has a handy command for that. Simply open the Terminal and run the following command:

ls -1 /Applications

...
Discord.app
Docker.app
GIMP.app
...

It’s worth noting that the list generated by this command includes all applications in your /Applications folder, even those that can be easily reinstalled with Homebrew.

Homebrew

Are you using Homebrew? Then the first thing you should do is get a list of the software installed on your old Mac. Launch the Terminal, then execute this command:

brew list -1

==> Formulae
azure-cli
azure-functions-core-tools@4
base64
borgbackup
...
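Beyond a plain list, Homebrew can also write everything it manages into a Brewfile and replay it later. These are the standard brew bundle subcommands; run them on your actual Macs, since they depend on your local Homebrew installation:

```shell
# On the old Mac: write all installed taps, formulae and casks to ./Brewfile
brew bundle dump

# On the new Mac (after installing Homebrew): install everything in the Brewfile
brew bundle install
```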

Visual Studio Code Extensions

To list all the Visual Studio Code extensions you’ve installed, open a terminal and type the following command:

code --list-extensions

This will give you a straightforward list of all your extensions. If you want to take it a step further, you can use the following command to generate a series of commands that will reinstall each extension:

code --list-extensions | xargs -L 1 echo code --install-extension

This command creates a list where each line is a command to install one of your extensions, like this:

...
code --install-extension editorconfig.editorconfig
code --install-extension esbenp.prettier-vscode
...

With this list, you can easily reinstall all your VS Code extensions on your new machine with minimal effort, ensuring your development environment is back to how you like it in no time.

SDKMAN!

If you’re using SDKMAN to manage your JDKs and other SDKs, there’s one more handy command worth mentioning. By running:

tree -L 2 ~/.sdkman/candidates/

...
├── gradle
│   ├── 8.4
│   ├── 8.5
│   ├── 8.6
│   └── current -> 8.6
├── java
│   ├── 11.0.16.1-tem
│   ├── 17.0.8.1-tem
│   ├── 21.0.2-tem
...

you can quickly visualize all the SDKs and JDKs you have installed. This command provides a neat directory tree that displays each SDK along with its version. It’s a great way to quickly assess your development environment and ensure you have the right versions set up before diving back into coding on your new Mac.

Conclusion

And with that, we’ve come to the end of our guide on setting up your new Mac with all the essential development tools. Getting the right tools installed is the first step to a smooth and productive development environment, and with a few terminal commands you’re well on your way.

If you found this guide helpful, make sure to follow me for more tips and tricks on software development 😊.

Photo by Maxim Hopman on Unsplash

ByteBeat Odyssey – Refactoring

Of course it’s about refactoring

Welcome to the compilation of heartbeats from my work, a mosaic of moments, musings and flashes of inspiration on software development topics. Many posts were first published on various social media channels. Here I will loosely continue these chronicles. Enjoy reading.

Refactoring

🛠️ Refactoring is the secret behind clean, efficient code!

1. Identify areas that need improvement.
2. Define what you aim to achieve through refactoring.
3. Break down the refactoring steps into manageable chunks.
4. Refactoring is an iterative process. Don’t hesitate to revisit and refine.
5. Ensure each change doesn’t alter the functionality!
6. Testing helps catch bugs early and ensures the code still works as intended.
7. Measure the improvements brought by refactoring.

Ready to give your codebase a makeover? ✨

ByteBeat Odyssey – Test-Driven Development

Today it’s TDD’s turn

Test-Driven Development

🔍 Looking to level up your development skills? Discover the power of Test-Driven Development (TDD) 🚀

Here is how the TDD cycle works:

1. Add a test, which fails (Red)
2. Run the tests. See if any test fails
3. Write enough code to pass all the tests (Green)
4. Run the tests again. If any test fails, go to step 3.
5. Refactor the code. (Refactor)

ByteBeat Odyssey – Software Tests

This time it’s about tests

Software Tests

Looking to take your tests to the next level? Then check out these tips:

1. small and focused tests
2. test only one specific aspect
3. check positive and negative cases
4. include only the code necessary to pass the tests
5. use code coverage to test all the relevant parts

ByteBeat Odyssey – ClickOps

Now it’s time for a bad practice

ClickOps

ClickOps: 1. an error-prone, time-consuming process of clicking through various options, 2. manual configuration of computing infrastructure, or 3. correcting automated computing infrastructure.

ByteBeat Odyssey – Code Reviews

A new series from the daily work

Welcome to the compilation of my series of heartbeats from my work, a mosaic of moments, musings and flashes of inspiration on software development topics.

Each post is a snapshot, a glimpse, or a fleeting thought. This series captures the steps, missteps, and leaps that have occurred to me recently.

Many posts were first published on various social media channels. At the request of many, I have collected them here. I will loosely continue my chronicles. Enjoy reading.

Code Reviews

Did the programmer not understand your comments in the code review?

These tips will help you improve your next review and give your comments the boost they need:

1. Point out only specific problems.
2. Explain your opinion.
3. Include examples.

Validate your specs with OpenAPI Style Validator

OpenAPI Style Validator is a tool for creating API specs with understandable descriptions, examples and consistent naming conventions.

The validator helps developers identify issues in OpenAPI specifications. With defined rules, you can describe exactly how an API specification should look. Specs can be checked automatically, and the results can be used in code reviews or even in a build pipeline, where rule violations break the build.

Complete descriptions and naming conventions
The validator checks various objects of the OpenAPI schema, starting with the info object and the associated contact and license objects. Often these details are not provided at all. As an example, here is how the popular Petstore example provides the details:

{
"info": {
    "version": "1.0.0",
    "title": "Swagger Petstore",
    "description": "A sample API that uses a petstore as an example to demonstrate features in the OpenAPI 3.0 specification",
    "termsOfService": "http://swagger.io/terms/",
    "contact": {
      "name": "Swagger API Team",
      "email": "apiteam@swagger.io",
      "url": "http://swagger.io"
    },
    "license": {
      "name": "Apache 2.0",
      "url": "https://www.apache.org/licenses/LICENSE-2.0.html"
    }
  }
}

The next object we’ll take a closer look at is the operation object. But let’s start with the paths object, which contains all the paths to existing endpoints (path items). A single path (e.g. /pets) contains operations that describe which HTTP methods are allowed.

{
"/pets/{id}": {
    "get": {
      "description": "Returns a user based on a single ID, if the user does not have access to the pet",
      "operationId": "find pet by id",
      "parameters": [
        {
          "name": "id",
          "in": "path",
          "description": "ID of pet to fetch",
          "required": true,
          "schema": {
            "type": "integer",
            "format": "int64"
          }
        }
      ]
    }
   }
}

The OpenAPI Style Validator detects whether certain properties exist. For example, the property "summary" is missing in the above listing, while a "description" is present. The absence of a property is an error if you have configured it that way.

Let’s look at how the OpenAPI Style Validator checks the data type descriptions. Data types are defined in the OpenAPI specification as schema objects that can be referenced in requests or responses (e.g. "$ref": "#/components/schemas/NewPet"). The validator can check whether all schema properties, like description and example, are present and not empty.

{
"NewPet": {
    "type": "object",
    "required": [
      "name"
    ],
    "properties": {
      "name": {
        "type": "string"
      },
      "tag": {
        "type": "string"
      }
    }
  }
}

If we look at the NewPet schema object in the listing above, we find neither descriptions nor examples. Examples and descriptions in an API spec make the documentation more understandable.

Now let’s move on to naming conventions, which help make an API easier to use. The OpenAPI Style Validator supports a number of different conventions that can be applied to paths, path parameters, query parameters, cookies, headers and properties: underscore case (snake_case), camel case as we know it from Java and JavaScript, and the so-called hyphen case, also known as kebab case.

Options and launching the OpenAPI Style Validator
With the rules we have now learned, we can control how an OpenAPI specification has to look. What kind of options do we have? There are boolean options like "validateOperationOperationId", which, when set to true, requires that each operation has an id, or "validateOperationSummary", which requires that each operation has a summary. There are also string options like pathNamingConvention, parameterNamingConvention, pathParamNamingConvention and queryParamNamingConvention. With these, we can determine whether elements should follow, e.g., the underscore case or camel case naming convention.

So, how do we launch the validator? The Maven command looks like this:

mvn openapi-style-validator:validate

For Maven, the OpenAPI Style Validator plugin must be configured inside the pom.xml. Currently, the io.swagger.core.v3 dependency must be excluded, otherwise a newer version of the library is used, which unfortunately is incompatible with version 1.8 of the OpenAPI Style Validator. As you can see in the following listing, each option can be added as a parameter under an XML tag, e.g. "validateOperationSummary" with true or false as text content.

<plugin>
<groupId>org.openapitools.openapistylevalidator</groupId>
<artifactId>openapi-style-validator-maven-plugin</artifactId>
<version>1.8</version>
<configuration>
    <inputFile>petstore-expanded.json</inputFile>
</configuration>
<dependencies>
    <dependency>
        <groupId>org.openapitools.empoa</groupId>
        <artifactId>empoa-swagger-core</artifactId>
        <version>2.0.0</version>
        <exclusions>
            <exclusion>
             <groupId>io.swagger.core.v3</groupId>
                <artifactId>swagger-models</artifactId>
            </exclusion>
        </exclusions>
    </dependency>
</dependencies>
</plugin>
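Building on the plugin configuration above, the options discussed earlier go directly under the <configuration> tag. Here is a sketch; the option names are taken from the text above, but treat the exact values as assumptions and check the validator’s documentation:

```xml
<configuration>
    <inputFile>petstore-expanded.json</inputFile>
    <!-- boolean options: require an operationId and a summary for each operation -->
    <validateOperationOperationId>true</validateOperationOperationId>
    <validateOperationSummary>true</validateOperationSummary>
    <!-- string option: enforce a naming convention for paths, e.g. hyphen case -->
    <pathNamingConvention>HyphenCase</pathNamingConvention>
</configuration>
```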

And by the way, with the default configuration we get the following result for the Petstore example:

[INFO] --- openapi-style-validator:1.8:validate (default-cli) @ openapitools-validator-mvn-example ---
[INFO] Validating spec: petstore-expanded.json
[ERROR] OpenAPI Specification does not meet the requirements. Issues:

[ERROR]         *ERROR* in Operation GET /pets 'summary' -> This field should be present and not empty
[ERROR]         *ERROR* in Operation GET /pets 'tags' -> The collection should be present and there should be at least one item in it
[ERROR]         *ERROR* in Operation POST /pets 'summary' -> This field should be present and not empty
[ERROR]         *ERROR* in Operation POST /pets 'tags' -> The collection should be present and there should be at least one item in it
[ERROR]         *ERROR* in Operation GET /pets/{id} 'summary' -> This field should be present and not empty
[ERROR]         *ERROR* in Operation GET /pets/{id} 'tags' -> The collection should be present and there should be at least one item in it
[ERROR]         *ERROR* in Operation DELETE /pets/{id} 'summary' -> This field should be present and not empty
[ERROR]         *ERROR* in Operation DELETE /pets/{id} 'tags' -> The collection should be present and there should be at least one item in it
[ERROR]         *ERROR* in Model 'NewPet', property 'name', field 'example' -> This field should be present and not empty
[ERROR]         *ERROR* in Model 'NewPet', property 'name', field 'description' -> This field should be present and not empty
[ERROR]         *ERROR* in Model 'NewPet', property 'tag', field 'example' -> This field should be present and not empty
[ERROR]         *ERROR* in Model 'NewPet', property 'tag', field 'description' -> This field should be present and not empty
[ERROR]         *ERROR* in Model 'Error', property 'code', field 'example' -> This field should be present and not empty
[ERROR]         *ERROR* in Model 'Error', property 'code', field 'description' -> This field should be present and not empty
[ERROR]         *ERROR* in Model 'Error', property 'message', field 'example' -> This field should be present and not empty
[ERROR]         *ERROR* in Model 'Error', property 'message', field 'description' -> This field should be present and not empty
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time:  3.342 s

Links 
OpenAPI Style Validator https://github.com/OpenAPITools/openapi-style-validator
Petstore example https://github.com/OAI/OpenAPI-Specification/blob/main/examples/v3.0/petstore-expanded.json
Validation example https://github.com/claudioaltamura/openapi-tools/tree/main/part-two-validator/spring-boot-example

Project Pilot – Short Notes

Series on Software Design Practices – Part 3

In this small series on design practices, I give a brief and simple explanation of each practice. In part 2, I wrote about prototypes. This time, I present the project pilot as another software development design practice.

We have already talked about spikes, which help you find approaches to a technical problem, and prototypes, which test a specific concept. What these concepts have in common is that they should not be used in production: spikes may not be good enough, and prototypes only focus on feasibility.

This is where the so-called project pilot comes in. It adds additional requirements regarding production readiness, so that you are able to test the viability of the software and how likely it is to succeed. It is the first phase of a larger project with a defined productive scope. And very importantly, project pilots provide you with valuable feedback on ideas and concepts.

Project pilots are another good strategy for managing risk: they help you uncover potential flaws ahead of a full launch. Use this feedback to identify issues and correct them in advance.
