Create URL-friendly short IDs just like YouTube.
Suitable for generating -
- short IDs for new users.
- referral codes for users in an affiliate program.
- file names for user-uploaded documents / resources.
- short URLs (like Bitly) for sharing links on social media platforms.
- URL slugs for dynamically generated content like blog posts, articles, or product pages.
Works with ES6 (ECMAScript) as well as with CommonJS.
Using npm: `npm i ytid`
Using yarn: `yarn add ytid`
Using pnpm: `pnpm i ytid`
With ES6 (ECMAScript):
import { ytid } from "ytid";
console.log(ytid()); // gocwRvLhDf8
With CommonJS:
const { ytid } = require("ytid");
console.log(ytid()); // dQw4w9WgXcQ
YouTube uses `0-9`, `A-Z`, `a-z`, `_`, and `-` as possible characters for its IDs, which gives each position in an ID one of 64 possible characters. However, because capital `I` and lowercase `l` appear very similar in URLs (`I` → I, `l` → l), ytid excludes them. Hence, ytid uses `0-9`, `A-H`, `J-Z`, `a-k`, `m-z`, `_`, and `-`, leaving 62 possible characters for each position in the ID.
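As an illustration of this alphabet (a rough sketch, not ytid's actual implementation), an 11-character ID can be drawn from the 62 allowed characters like this:

```js
// A minimal illustration — not ytid's actual implementation.
const crypto = require("crypto");

// The 62-character alphabet described above: 0-9, A-Z and a-z without
// the look-alikes I and l, plus _ and -.
const ALPHABET =
  "0123456789" +
  "ABCDEFGHJKLMNOPQRSTUVWXYZ" + // A-Z without I
  "abcdefghijkmnopqrstuvwxyz" + // a-z without l
  "_-";

function randomId(length = 11) {
  let id = "";
  for (const byte of crypto.randomBytes(length)) {
    // Modulo mapping introduces a tiny bias towards the first characters;
    // acceptable for a sketch, but worth noting.
    id += ALPHABET[byte % ALPHABET.length];
  }
  return id;
}

console.log(randomId()); // e.g. "Tq4_kzR8mWb"
```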
A Backlinko study, based on an analysis of 11.8 million Google search results, found that short URLs rank above long URLs.
And a Brafton study found a correlation between short URLs and more social shares, especially on platforms such as Twitter, which have character limits.
These studies highlight the benefits of short URLs over long ones.
All the generated IDs are checked against a dataset of offensive / profane words to ensure they do not contain any inappropriate language.
As a result, ytid doesn't generate IDs like `7-GoToHell3` or `shit9RcYjcM`.
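Conceptually, this can be pictured as a regenerate-until-clean loop. The sketch below is only an assumption about how such a check might look — `BLOCKLIST`, `isClean`, and `safeId` are made-up names, not ytid's API, and the real check runs against the preprocessed dataset described below:

```js
// A simplified illustration — placeholder names, not part of ytid's API.
const BLOCKLIST = ["gotohell", "shit"]; // stand-in for the real dataset

function isClean(id) {
  const lower = id.toLowerCase();
  return !BLOCKLIST.some((word) => lower.includes(word));
}

function safeId(generateCandidate) {
  let id = generateCandidate();
  while (!isClean(id)) {
    id = generateCandidate(); // try again until no blocked word appears
  }
  return id;
}

console.log(isClean("7-GoToHell3")); // false
console.log(isClean("dQw4w9WgXcQ")); // true
```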
The dataset of offensive / profane words is a combination of various datasets -
These datasets undergo the following preprocessing steps -
- Firstly, all the datasets are combined into a single dataset.
- Then the duplicate instances are removed.
- Then two new datasets are created -
  - A dataset in which all spaces are replaced with `-`.
  - A dataset in which all spaces are replaced with `_`.
- These two datasets are then combined to form a new dataset. This ensures that the dataset contains phrases with spaces in the form of hyphen-separated words as well as underscore-separated words.
- Then, duplicate values are removed from this new dataset.
- Finally, only the instances that match the regex pattern `^[A-Za-z0-9_-]{0,11}$` are kept, while the rest are removed. This keeps the dataset small by dropping words and phrases that could never appear in an 11-character ID anyway.
Preprocessing yields a dataset of 3656 instances, which helps ensure that the generated IDs are safe for use in URLs and for sharing on social media platforms.
The preprocessing was done in this Colab (Jupyter) notebook.
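The notebook does the real work; purely as an illustration, the listed steps could be sketched in JavaScript roughly like this (`preprocess` is a hypothetical helper, not part of ytid):

```js
// Illustrative restatement of the preprocessing steps —
// the actual preprocessing was done in the Colab notebook mentioned above.
function preprocess(datasets) {
  // 1. Combine all source datasets and drop duplicates.
  const combined = [...new Set(datasets.flat())];

  // 2. Create hyphen- and underscore-separated variants of each phrase,
  //    then combine them (the Set drops duplicates again).
  const variants = new Set();
  for (const phrase of combined) {
    variants.add(phrase.replace(/ /g, "-"));
    variants.add(phrase.replace(/ /g, "_"));
  }

  // 3. Keep only instances that could appear inside an 11-character ID.
  const pattern = /^[A-Za-z0-9_-]{0,11}$/;
  return [...variants].filter((word) => pattern.test(word));
}

console.log(preprocess([["go to hell", "shit"], ["shit"]]));
// [ 'go-to-hell', 'go_to_hell', 'shit' ]
```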
Future release(s) will expand the dataset to include words / phrases from other languages (that use the English alphabet).