Make your JavaScript project accessible to newcomers

In frontend development we talk a lot about accessibility for people who use screen readers, people with vestibular disorders, and so on. But in this post I would like to talk about a different kind of accessibility: when a new team member starts working on a project, how long does it take them to get the hang of developing it?

The term that I find most suitable for this is developer experience, or DX.

In my modest experience, a proper onboarding process is more often a fantasy than a reality. Newcomers are usually left to figure things out on their own and to ask questions when something is unclear, which is fine as long as the team has put continuous effort into making the project easy to work on. Having to ask many questions doesn’t only slow down other team members; it can also demotivate the newcomer and make them feel neglected or, even worse, inadequate.

Older projects whose teams have undergone little to no change are especially vulnerable to poor DX, because everyone has either already figured the project out or was there from the beginning. The more time passes, the harder it is to see the project through a beginner’s eyes, and the less motivation there is to improve DX, because people have gotten used to the problems.

Progressive disclosure

Newcomers shouldn’t have to ask too many questions, just like people should be able to use apps without reading a manual. The project should be self-explanatory, and information should reveal itself gradually, when people need it. The analogy for this in interaction design is called progressive disclosure:

“Progressive disclosure is an interaction design pattern that sequences information and actions across several screens (e.g., a step-by-step signup flow). The purpose is to lower the chances that users will feel overwhelmed by what they encounter.”

I see no reason not to follow a similar pattern when designing a project’s build environment. Avoid having dozens of npm scripts of which only a handful are meant to be run by humans. Reveal the logic of the project’s environment progressively.

Improve readability

A common example of readability issues in JavaScript projects is package.json’s scripts field, a.k.a. npm scripts, which often look like this:

{
  "scripts": {
    "build": "yarn build:cjs && yarn build:esm",
    "build:base": "babel src --ignore **/__tests__/**/*,**/__mocks__/**/*,**/*.stories.js",
    "build:cjs": "cross-env BABEL_ENV=cjs yarn build:base --out-dir cjs",
    "build:esm": "cross-env BABEL_ENV=esm yarn build:base --out-dir esm"
  }
}

One could argue that fleshing out the logic of every script makes it easier to understand how the project works, but I can barely read the scripts above, and once I’ve figured them out I don’t want to see the whole thing over and over again; I just want the gist so I can use them.

Another thing is that I will never have to run build:base; it exists only to keep build:cjs and build:esm DRY. Most of the time I won’t have to run those two scripts either, except when debugging; the rest of the time they will just be in the way. How about having only npm scripts which are meant to be run by humans? One way to achieve this is to move the underlying logic to a gulpfile.js, which offers much more room to express yourself. You can use the Node.js API of your tools to write nicer code, or use a plugin like gulp-babel, and export only the tasks that you want people to run. It’s especially useful for task composition, i.e. running tasks in series or parallel. Read my post about managing complex tasks with gulp v4 if you’re interested in this approach.
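As a sketch of the idea (the paths and envName values are illustrative, assuming a Babel config that defines cjs and esm environments):

// gulpfile.js
const { src, dest, parallel } = require("gulp");
const babel = require("gulp-babel");

const SOURCES = [
  "src/**/*.js",
  "!src/**/__tests__/**",
  "!src/**/__mocks__/**",
  "!src/**/*.stories.js",
];

// The shared logic lives in a plain function instead of a build:base script
const buildVariant = (envName, outDir) => () =>
  src(SOURCES)
    .pipe(babel({ envName })) // picks the matching environment from the Babel config
    .pipe(dest(outDir));

// Only the task meant to be run by humans is exported
exports.build = parallel(buildVariant("cjs", "cjs"), buildVariant("esm", "esm"));

The scripts field then shrinks to a single "build": "gulp build" entry.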

Many people who want to solve task composition in npm scripts eventually reach for npm-run-all, but in my experience it doesn’t make scripts clearer, only shorter. Besides, how are people who have never heard of npm-run-all supposed to figure out where run-p is coming from in order to understand what it means?
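For illustration, the build script from earlier might be shortened to something like this (a hypothetical rewrite using run-p, the parallel runner that npm-run-all provides):

{
  "scripts": {
    "build": "run-p build:cjs build:esm"
  }
}

It is shorter than chaining yarn calls with &&, but it’s one more thing to decode for someone who hasn’t seen npm-run-all before.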

Less development documentation

Development documentation can help when it’s used sparingly and kept as close to the code as possible, ideally in the form of comments. Otherwise it’s prone to becoming outdated, and people tend not to trust it; at least I don’t.

It’s also worth noting that a good naming strategy and an easy-to-navigate codebase beat descriptive documentation, and sometimes we can’t document things where we’d like to: package.json cannot contain comments, and if we document npm scripts in README.md, the documentation is almost bound to become outdated, because when we change the scripts we might forget to update it. Out of sight, out of mind.

If there are parts of the documentation that you can update automatically, take those opportunities to decrease maintenance. For example, I wanted to document which browsers are supported by a frontend project I was working on, so I generated that list by mapping the browserslist configuration to browser names using caniuse-db’s data.json:

const browserslist = require("browserslist");
const caniuse = require("caniuse-db/data.json").agents;

// Resolve the project's browserslist config into entries like "and_chr 81",
// then map each browser id to its human-readable name from caniuse-db
const browsers = browserslist().map(b => {
  const [id, version] = b.split(" ");
  return `${caniuse[id].browser} ${version}`;
});

I inserted this list into the docs using markdown-magic; see the project’s markdown.config.js.
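As a rough sketch of what that configuration can look like (the transform name BROWSERS is made up for this example):

// markdown.config.js
const browserslist = require("browserslist");
const caniuse = require("caniuse-db/data.json").agents;

module.exports = {
  transforms: {
    // Used in the docs via <!-- AUTO-GENERATED-CONTENT:START (BROWSERS) -->
    BROWSERS() {
      return browserslist()
        .map(b => {
          const [id, version] = b.split(" ");
          return `- ${caniuse[id].browser} ${version}`;
        })
        .join("\n");
    },
  },
};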

Enforce requirements and limitations

Run checks automatically; don’t leave it up to people to remember to run them. Today we have tools like husky and lint-staged, so there is no need to leave this up to chance. Just like it’s best not to put off until tomorrow what you can do today, don’t put off until CI what you can do in a git hook or a pre*/post* npm script.

Did someone forget to run tests before publishing the package? Add a prepublishOnly npm script and stop them in the future! See the other npm scripts to find exactly which ones you want; for example, people often mistakenly use prepare instead of prepublishOnly, which runs a bunch of build and test scripts every time dependencies are installed and can significantly hurt DX.
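A minimal sketch of such a script (assuming the project already has lint and test scripts):

{
  "scripts": {
    "lint": "eslint src",
    "test": "jest",
    "prepublishOnly": "yarn lint && yarn test"
  }
}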

If people are committing a file with a problem that ESLint can fix automatically, fix it in the pre-commit hook by combining husky and lint-staged. That way contributors don’t even have to know about the problem, especially if it was just a matter of formatting. If you want your commit messages to follow certain conventions, like Conventional Commits, enforce them using a tool like commitlint in the commit-msg hook. The error message should efficiently communicate which guidelines the commit message should follow, and nothing less. If the tool itself doesn’t provide an adequate error message, catch the error and throw a better one! Users are not the only ones who deserve a good failure experience; your colleagues do as well, especially newcomers. A good error message can substitute for a piece of documentation because it explains what went wrong (and what to do about it) right on the spot. No amount of guidelines is that powerful.
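A sketch of what this can look like in package.json, assuming a husky v4-style configuration:

{
  "husky": {
    "hooks": {
      "pre-commit": "lint-staged",
      "commit-msg": "commitlint -E HUSKY_GIT_PARAMS"
    }
  },
  "lint-staged": {
    "*.js": "eslint --fix"
  }
}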

If you noticed that your development environment fails on Yarn versions lower than, say, 1.22, you can enforce that, too. One way to do this is by installing the desired version in the project itself:

# alias for "yarn policies set-version" introduced in yarn 1.22
yarn set version 1.22

Another way of enforcing the limitation is to use the engines field in package.json instead:

{
  "private": true,
  "engines": {
    "yarn": ">=1.22"
  }
}

This will cause yarn install to fail for people who have a lower version of Yarn. However, you generally want to do this only in the package.json of an application or a monorepo (which is why I added "private": true), because if you add it to the package.json of a library, the limitation will transfer to the published code, i.e. to the consumer! For libraries, the engines field is commonly used to specify which versions of Node they support.
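For example, a library might declare something like this (the name and version range are illustrative):

{
  "name": "some-library",
  "engines": {
    "node": ">=12"
  }
}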

Long story short: try to enforce as many technical requirements as you can, so that in CONTRIBUTING.md you can focus on higher-level guidelines.

Test your build process

Adding tests for complex parts of your development environment may sound a little meta and possibly like overengineering, but problems in your build process can be even harder to detect than problems in runtime functionality.

A build process usually outputs files, so the first question that might arise is: how do we test that? We don’t want our tests to write to the filesystem. The answer is to mock fs with a module like memfs, which I’ve used a lot and highly recommend. Does your build process send network requests? You might be able to use MSW to mock the API. I’m saying “might” because I haven’t used it yet, but it seems like the right tool for the job.
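Here is a minimal sketch of mocking fs with memfs, assuming Jest as the test runner and a hypothetical build() function that writes output files:

// __mocks__/fs.js: tell Jest to use memfs instead of the real fs module
const { fs } = require("memfs");
module.exports = fs;

// build.test.js
jest.mock("fs");
const { vol } = require("memfs");
const { build } = require("./build"); // hypothetical function under test

beforeEach(() => {
  // Start each test from a clean in-memory filesystem with fake sources
  vol.reset();
  vol.fromJSON({ "/project/src/index.js": "export const answer = 42;" });
});

test("writes compiled output without touching the real disk", async () => {
  await build({ srcDir: "/project/src", outDir: "/project/dist" });
  expect(Object.keys(vol.toJSON())).toContain("/project/dist/index.js");
});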

If you don’t feel confident about some parts of the code, it’s never silly to test them, regardless of whether they are user-facing or developer-facing. Don’t let technicalities stop you.

“DX Developer” or “Platform Developer”

Tooling is complicated, and many people just want to get the configuration out of the way, but in my opinion this should be a continuous process handled with care by people who actually like doing it. This is why I believe there should be a separate role dedicated to this, which I call DX Developer, though there are various other names for it, like “Web Platform”. In this role I’m solely focused on making development easier, and the fact that coworkers personally thank me for it is proof that this type of work is helpful. I’m no longer actively participating in user-facing features, so I no longer have any idea what my preferred way of fetching and syncing data in React is, because, despite working on React projects, my field of expertise has shifted significantly.

Some people replied to my tweet arguing that every developer should know this stuff, but aren’t there already enough things that every frontend developer is expected to know? It’s useful to know the basics of webpack, but should everyone be an expert on code splitting? It’s a good habit to document things, but we do have technical writers, don’t we? So let’s consider having DX developers, too!

Conclusion

Frontend projects get really complex really fast; there’s a lot of tooling required just to get started. By continuously improving the developer experience we’re making the project more resilient as people come and go, and increasing the likelihood that people will stay on the team. A project that speaks for itself, so to speak, is much easier to develop. If one team member has to explain why they did something a certain way, that’s a red flag, because that person isn’t going to be around forever; the decision should make sense on its own.

If your team struggles with working on a project but doesn’t have the time, will, or skills to improve its tooling, it might be time to allocate resources for a DX developer.