In today’s DevOps world, you are trying to automate away the boring tasks.
Like many, many other projects, I track my code in git.
So part of my “Initial commit” is a .gitignore file.
Normally I turn to the wonderful gitignore.io website to generate a starting point; project-specific additions then go at the end. They also have an endpoint listing all valid keywords, if you are interested.
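For example, both the generator and the keyword endpoint can be used straight from the command line (the gitignore.io API is nowadays hosted under Toptal’s domain; the chosen keywords are just an example):

```shell
# Base URL of the gitignore.io API (nowadays hosted by Toptal)
API="https://www.toptal.com/developers/gitignore/api"

# Generate a starting .gitignore for a Node project (keywords are illustrative);
# fall back to an empty file if the network is unavailable
curl -fsSL "$API/node,macos" -o .gitignore || touch .gitignore

# List all valid keywords
curl -fsSL "$API/list" || true
```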
Every project should have a README. Really, I mean it.
To me, the minimum contains:
- Name of the project.
- A short description of what it is about.
- Installation instructions.
- Getting started / Usage.
- Link to the license.
In short: everything that helps evaluate the repo and gets you started.
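As a sketch, such a README might start out like this (project name and commands are placeholders):

```markdown
# my-project

A short description of what this project is about.

## Installation

    npm install my-project

## Getting started

    const myProject = require('my-project');

## License

[MIT](LICENSE)
```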
Speaking of licenses… if you don’t create your repo via GitHub’s web UI, choosealicense offers a nice way to add a license file.
This way you open up your product to contributions from others :-)
If you add a license, don’t forget to add the SPDX identifier to the license field of your package.json.
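For example, with the MIT license the relevant part of the package.json looks like this (“MIT” being its SPDX identifier):

```json
{
  "license": "MIT"
}
```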
Now, let’s get into more details of a software repo.
I recommend having certain things set up for every single one:
In order to harmonise the way we indent our code and the like, EditorConfig is the way to go.
It lets you specify the number of spaces a tab is expanded to, or the line-ending style. Check it out.
Gone are the times where you had to use “magic comments” for Vim and Emacs :-)
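A minimal .editorconfig could look like this (the concrete values are my personal taste, not a recommendation):

```ini
# top-most EditorConfig file
root = true

[*]
charset = utf-8
end_of_line = lf
indent_style = space
indent_size = 2
insert_final_newline = true
```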
Personally, I’ve learned to ditch the web UI and go with git-flow on the CLI. It makes managing branches and releases so much easier.
From senior developers I’ve heard about alternatives like trunk-based development.
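Under the hood, git-flow’s feature workflow boils down to a handful of plain git commands; this sketch replays them by hand in a throwaway repo (repo and branch names are made up):

```shell
# Set up a throwaway repo (identity configured so the commits succeed)
git init -q demo-repo && cd demo-repo
git config user.email "dev@example.com" && git config user.name "Dev"
git commit -q --allow-empty -m "Initial commit"

# What `git flow feature start/finish my-feature` does, in plain git:
git checkout -q -b develop
git checkout -q -b feature/my-feature develop
git commit -q --allow-empty -m "Work on feature"
git checkout -q develop
git merge -q --no-ff -m "Merge feature" feature/my-feature
git branch -d feature/my-feature
```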
Continuous Integration and Continuous Deployment
Also known as CI/CD.
Personally, I am using CircleCI, since it does not ask for many permissions. Travis CI looks more popular, though.
The important thing is to check that every commit you push still integrates. Ideally you write tests (especially unit and end-to-end ones) to give you some assurance that you didn’t break anything.
If the codebase grows larger, this will let you sleep well :-)
Continuous deployment ensures that your code makes it to a server. You can decide whether that should be a staging or a production instance.
What’s important to me here is configuration-as-code. It puts the pipeline under version control, makes it easy to review and share, and lets me trace changes back to their origin.
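As a sketch, a minimal CircleCI configuration lives in .circleci/config.yml and might look like this (image tag and commands are illustrative):

```yaml
version: 2.1
jobs:
  build:
    docker:
      - image: cimg/node:lts
    steps:
      - checkout
      - run: npm ci
      - run: npm test
workflows:
  build-and-test:
    jobs:
      - build
```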
To me, it’s important to have known-good states of my code, so that I can roll back if necessary. This gets a bit hairy if some persistence layer is involved (say, a database migration).
Some people in the community propose to “fail forward” instead, but from what I’ve heard, opinions range from mixed to “dangerous”.
Make up your mind about what you want to do in case everything goes wrong, since it will happen anyway.
Security is important. That’s why you should tackle it along the whole lifecycle of your project.
There are services to help you along the way. I want to highlight Snyk. It checks your code for known security vulnerabilities and informs you (if possible, with instructions on how to patch them).
In case you want to use it with more than one org, don’t forget to set the SNYK_TOKEN environment variable.
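A typical setup might look like this (the token value is a placeholder; `snyk test` and `snyk monitor` are the CLI’s main commands):

```shell
# Snyk reads its API token from this environment variable,
# which is handy when you switch between more than one org
export SNYK_TOKEN="00000000-0000-0000-0000-000000000000"  # placeholder

# In a real project you would then run:
#   npx snyk test      # check dependencies for known vulnerabilities
#   npx snyk monitor   # record a snapshot in the Snyk UI
```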
Since it’s the same language, I can apply the same tooling to ease my job.
Plus, a text file can be put under version control. This way you can recap at what point in time you upgraded.
A Node version manager is especially helpful if you are working under certain constraints, say on a system where you can’t update the global Node environment to the version you need.
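Assuming nvm as the version manager, pinning the Node version is a one-liner (the version number is just an example):

```shell
# Pin the project's Node version in a plain text file
echo "20" > .nvmrc

# Anyone checking out the repo can then switch with:
#   nvm use
cat .nvmrc
```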
Similar but different are the config files for npm, the Node package manager. An alternative is npx, but I like to stick to the original vendor if possible.
In my .npmrc I add config options to disable the progress bar (to speed up the install process) and to tell npm to always install the exact version of a package.
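The two options mentioned above translate into this .npmrc (both are real npm config keys):

```ini
# disable the progress bar to speed up installs
progress=false
# always pin the exact version instead of a ^ range
save-exact=true
```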
If you need fully reproducible builds, go for one of the alternatives. Many others have blogged about why, so I won’t repeat them here.
To me, npm is good enough, though.
You can add metadata to your package.json: a link to the homepage or bug tracker, or contact details of the original author and her contributors.
The most important aspect is likely the tracking of the dependencies, though.
Again, it gives you a way to trace back package upgrades (which you should do regularly by the way).
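As an illustration, the metadata section of a package.json might look like this (all names and URLs are made up):

```json
{
  "name": "my-package",
  "version": "1.0.0",
  "homepage": "https://example.com/my-package",
  "bugs": "https://github.com/example/my-package/issues",
  "author": "Jane Doe <jane@example.com>",
  "contributors": ["John Doe <john@example.com>"],
  "dependencies": {
    "left-pad": "1.3.0"
  }
}
```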
If you are going to apply semantic versioning, I can recommend npm version, since it not only updates your package.json but also creates a git tag for you. The only things left for you to do are pushing the changes and publishing the updated version of your package :-)
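Here is that flow in a throwaway package (`npm init -y` starts at version 1.0.0; the push and publish steps are left as comments since they need a remote and a registry):

```shell
# Set up a throwaway package under git
mkdir pkg-demo && cd pkg-demo
git init -q
git config user.email "dev@example.com" && git config user.name "Dev"
npm init -y > /dev/null
git add package.json && git commit -qm "Initial commit"

# Bump the patch version: updates package.json and creates the tag v1.0.1
npm version patch

# Then, for a real package:
#   git push --follow-tags
#   npm publish
```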
This way, I can make sure that my code looks coherent. That’s important to me, because it saves me from thoughts like:
“Was there a reason why this piece of code looks different than the others?”
Be aware that ESLint is really powerful. You can configure loads of things. To speed up your job, they offer shareable configs.
Oh, one last thing: go for a local install instead of a global one. This will save you some time if you are using plugins and something goes south.
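A small .eslintrc.json using the built-in recommended rules plus one rule of your own might look like this (the indent width is just an example):

```json
{
  "extends": "eslint:recommended",
  "env": {
    "node": true,
    "es6": true
  },
  "rules": {
    "indent": ["error", 2]
  }
}
```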
Recently I’ve learned about ink-docstrap to make the generated docs look nicer.
You could use Flow (or TypeScript) without JSDoc, but would need to compile the code before you can use it.
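For illustration, a JSDoc-annotated function looks like this (the function itself is made up):

```javascript
/**
 * Adds two numbers.
 * @param {number} a - The first summand.
 * @param {number} b - The second summand.
 * @returns {number} The sum of both summands.
 */
function add(a, b) {
  return a + b;
}

console.log(add(2, 3)); // 5
```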
Since you may need different formats for the code you ship, you should use a bundler.
The most popular one is webpack, but that’s a Swiss Army knife.
If you prefer something simpler, give rollup a shot; it does an incredible job! You can use it to turn your ES6 modules into an IIFE (Immediately Invoked Function Expression), into CommonJS for Flow and testing, and into UMD for general usage by others.
All from a single config file, which is quite easy to understand!
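Such a config might look roughly like this (a rollup.config.js; file names and the global name are made up):

```javascript
// rollup.config.js
export default {
  input: 'src/index.js',
  output: [
    { file: 'dist/my-lib.iife.js', format: 'iife', name: 'MyLib' },
    { file: 'dist/my-lib.cjs.js',  format: 'cjs' },
    { file: 'dist/my-lib.umd.js',  format: 'umd',  name: 'MyLib' }
  ]
};
```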
If you don’t like that, give renovate a try (a friend of mine is using it).
Uff, yeah, that’s a lot to do. But you should be able to tackle it within four hours or so. The good thing is that you will have to do this boilerplate only once.
I’ve seen people reach for CLIs and generators when their project reaches this kind of complexity. In my opinion, it’s good to walk through these steps manually, because these are decisions I want to make consciously.
Your mileage may vary :-)