
@thomasreggi 🌸

Can we have Deno-like URL imports in Node.js?

November 4, 2020

The decentralization of dependencies is one of my favorite features of Deno. Deno also simplifies the process of publishing and managing dependencies: any file online can be included independently in another project, and only its own tree of dependencies will be pulled in. On the flip side, with npm modules, if you were to require a single file that used a single npm module, you would unfortunately have to include ALL of the npm dependencies in your project.
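The "only its own tree" property can be illustrated with a toy resolver: given a registry of modules and their direct imports, resolving one entry point pulls in just its transitive closure, never the rest of the registry. A minimal sketch in TypeScript (the URLs and module graph here are made up for illustration):

```typescript
// Toy module graph keyed by URL: each entry lists its direct imports.
// These URLs are hypothetical, purely for illustration.
const graph: Record<string, string[]> = {
  "https://example.com/bar.ts": ["https://example.com/util.ts"],
  "https://example.com/util.ts": [],
  "https://example.com/unrelated.ts": ["https://example.com/huge-dep.ts"],
  "https://example.com/huge-dep.ts": [],
};

// Collect the transitive dependency tree of a single entry point.
function resolveTree(entry: string, seen = new Set<string>()): Set<string> {
  if (seen.has(entry)) return seen;
  seen.add(entry);
  for (const dep of graph[entry] ?? []) resolveTree(dep, seen);
  return seen;
}

const tree = resolveTree("https://example.com/bar.ts");
console.log(Array.from(tree));
// Only bar.ts and util.ts are pulled in; unrelated.ts is never fetched.
```

With npm, by contrast, installing the package that contains `bar.ts` would bring along everything in that package's dependency list, whether or not `bar.ts` uses it.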

I would love a way to include URL imports in Node.js, and I have a few thoughts about what that could look like.

  1. Do not use the existing require or import keywords; use a third-party module, or a separate command to run Node.
  2. Never fetch asynchronously at runtime; instead, have a url-import install command that parses the file and downloads locks / files.
  3. Accommodate npm packages: given a URL, we have to scan for and resolve package.json, package-lock.json, yarn.lock, and npm-shrinkwrap.json at every directory level.
  4. Accommodate tsconfig.json: search for the file at every level of the URL structure, and apply individual configs to specific files.
  5. Lock hashes of all downloaded files / URLs, throwing exceptions on mismatched hashes.
  6. Create a VS Code plugin to add type support.
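Point 5, hash locking, could look something like the following sketch. It uses Node's built-in crypto module to fingerprint each downloaded file at install time and to fail hard on a later mismatch. The lock-entry shape and function names are assumptions for illustration, not a real url-import API:

```typescript
import { createHash } from "node:crypto";

// Hypothetical lock entry: one per downloaded URL.
interface LockEntry {
  fileUrl: string;
  fileHash: string; // sha256 of the file contents at install time
}

// Hash a file's contents with SHA-256, hex-encoded.
function sha256(contents: string): string {
  return createHash("sha256").update(contents).digest("hex");
}

// On a later install, re-hash what was fetched and compare to the lock.
function verify(entry: LockEntry, fetchedContents: string): void {
  const actual = sha256(fetchedContents);
  if (actual !== entry.fileHash) {
    throw new Error(
      `Hash mismatch for ${entry.fileUrl}: expected ${entry.fileHash}, got ${actual}`
    );
  }
}

// Locking at install time, verifying on the next run (URL is illustrative):
const contents = "export const answer = 42;";
const entry: LockEntry = {
  fileUrl: "https://example.com/bar.ts",
  fileHash: sha256(contents),
};
verify(entry, contents);          // passes: contents unchanged
// verify(entry, contents + " "); // would throw: contents changed upstream
```

This is the same guarantee a lock file gives npm installs, applied to raw URLs: the remote file can change, but the install step will refuse to silently accept the new contents.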

The Vision

This is what it could look like if url-import were a third-party module. Running url-import install would download the file and perform a few other checks:

import { urlImport } from 'url-import';
const file = urlImport('')
  • Download to a common folder ~/url-import.
  • Parse bar.ts for require, import, and urlImport calls.
    • If there are local dependencies, download those files.
    • If there are package imports start checking for package.json.
  • Check
  • If above not found check
  • Save a url-import.lock in the current working directory that includes a “snapshot” looking something like { fileUrl, fileHash, tsconfigUrl, tsConfigHash, packageUrl, packageHash }: essentially, save all the URLs used / found, and hash the contents of every file. This allows us to confirm that the state can be replayed, and to track changes.
  • Check
  • If above not found check
  • Pluck the npm modules from the crawled files and match them up with their relevant resolved package.json.
  • Confirm all hashes / locks.
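The "parse bar.ts for require, import, and urlImport calls" step could start as a simple specifier scan. A real tool should use a proper parser (for example, the TypeScript compiler API), since regexes miss comments, template strings, and dynamic specifiers, but a regex sketch shows the idea; the patterns and the classification rules below are assumptions:

```typescript
// Naive scan for import / require / urlImport specifiers in a source string.
function scanSpecifiers(source: string): string[] {
  const patterns = [
    /import\s+[^'"]*from\s+['"]([^'"]+)['"]/g, // import x from '...'
    /require\(\s*['"]([^'"]+)['"]\s*\)/g,      // require('...')
    /urlImport\(\s*['"]([^'"]+)['"]\s*\)/g,    // urlImport('...')
  ];
  const found: string[] = [];
  for (const re of patterns) {
    let m: RegExpExecArray | null;
    while ((m = re.exec(source)) !== null) found.push(m[1]);
  }
  return found;
}

// Classify each specifier: remote URL, local file, or npm package name.
// Local dependencies get downloaded; package names trigger the
// package.json resolution walk described above.
function classify(spec: string): "url" | "local" | "package" {
  if (/^https?:\/\//.test(spec)) return "url";
  if (spec.startsWith("./") || spec.startsWith("../") || spec.startsWith("/")) {
    return "local";
  }
  return "package";
}

// Example source to crawl (contents are illustrative):
const src = `
import { x } from './local.ts';
const lodash = require('lodash');
const remote = urlImport('https://example.com/bar.ts');
`;
for (const spec of scanSpecifiers(src)) {
  console.log(classify(spec), spec);
}
```

Each classification drives a different branch of the install: URLs are fetched and crawled recursively, local paths are downloaded relative to the parent URL, and package names are matched against the resolved package.json files.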


I’d love for Node.js to have a more robust system for managing dependencies, and I wish the Node.js team were interested in building one around URLs. But this is really hard to do because of npm: mixing URL imports and npm imports means making a lot of requests and crawling many URLs.

What do you think? Does Node.js need to step away from NPM? Should we all just switch to Deno?