cacache
is a Node.js library for managing
local key and content address caches. It's really fast, really good at
concurrency, and it will never give you corrupted data, even if cache files
get corrupted or manipulated.
It was originally written to be used as npm's local cache, but can just as easily be used on its own.

```sh
$ npm install --save cacache
```
- Example
- Features
- Contributing
- API
- Reading
- Writing
- Utilities

```javascript
const cacache = require('cacache')
const fs = require('fs')
const tarball = '/path/to/mytar.tgz'
const cachePath = '/tmp/my-toy-cache'
const key = 'my-unique-key-1234'
// Cache it! Use `cachePath` as the root of the content cache
cacache.put(cachePath, key, '10293801983029384').then(digest => {
console.log(`Saved content to ${cachePath}.`)
})
const destination = '/tmp/mytar.tgz'
// Copy the contents out of the cache and into their destination!
// But this time, use stream instead!
cacache.get.stream(
cachePath, key
).pipe(
fs.createWriteStream(destination)
).on('finish', () => {
console.log('done extracting!')
})
// The same thing, but skip the key index.
cacache.get.byDigest(cachePath, tarballSha512).then(data => {
fs.writeFile(destination, data, err => {
console.log('tarball data fetched based on its sha512sum and written out!')
})
})
```

- Extraction by key or by content address (shasum, etc)
- Multi-hash support - safely host sha1, sha512, etc, in a single cache
- Automatic content deduplication
- Fault tolerance (immune to corruption, partial writes, etc)
- Consistency guarantees on read and write (full data verification)
- Lockless, high-concurrency cache access
- Streaming support
- Promise support
- Pretty darn fast
- Arbitrary metadata storage
- Garbage collection and additional offline verification
The cacache team enthusiastically welcomes contributions and project participation! There's a bunch of things you can do if you want to contribute! The Contributor Guide has all the information you need for everything from reporting bugs to contributing entire new features. Please don't hesitate to jump in if you'd like to, or even ask us questions if something isn't clear.
Lists info for all entries currently in the cache as a single large object. Each
entry in the object will be keyed by the unique index key, with corresponding
get.info
objects as the values.

```javascript
cacache.ls(cachePath).then(console.log)
// Output
{
'my-thing': {
key: 'my-thing',
digest: 'deadbeef',
hashAlgorithm: 'sha512',
path: '.testcache/content/deadbeef', // joined with `cachePath`
time: 12345698490,
metadata: {
name: 'blah',
version: '1.2.3',
description: 'this was once a package but now it is my-thing'
}
},
'other-thing': {
key: 'other-thing',
digest: 'bada55',
hashAlgorithm: 'whirlpool',
path: '.testcache/content/bada55',
time: 11992309289
}
}
```

Lists info for all entries currently in the cache as a stream of entry objects. This works just like `ls`, except `get.info` entries are returned as `'data'` events on the returned stream.

```javascript
cacache.ls.stream(cachePath).on('data', console.log)
// Output
{
key: 'my-thing',
digest: 'deadbeef',
hashAlgorithm: 'sha512',
path: '.testcache/content/deadbeef', // joined with `cachePath`
time: 12345698490,
metadata: {
name: 'blah',
version: '1.2.3',
description: 'this was once a package but now it is my-thing'
}
}
{
key: 'other-thing',
digest: 'bada55',
hashAlgorithm: 'whirlpool',
path: '.testcache/content/bada55',
time: 11992309289
}
{
...
}
```

Returns an object with the cached data, digest, and metadata identified by
key
. The data
property of this object will be a Buffer
instance that
presumably holds some data that means something to you. I'm sure you know what
to do with it! cacache just won't care. hashAlgorithm
is the algorithm used
to calculate the digest
of the content. This algorithm must be used if you
fetch later with get.byDigest
.
If there is no content identified by key
, or if the locally-stored data does
not pass the validity checksum, the promise will be rejected.
A sub-function, get.byDigest
may be used for identical behavior, except lookup
will happen by content digest, bypassing the index entirely. This version of the
function only returns data
itself, without any wrapper.
This function loads the entire cache entry into memory before returning it. If
you're dealing with Very Large data, consider using get.stream
instead.

```javascript
// Look up by key
cacache.get(cachePath, 'my-thing').then(console.log)
// Output:
{
  metadata: {
    thingName: 'my'
  },
  digest: 'deadbeef',
  hashAlgorithm: 'sha512',
  data: Buffer#<deadbeef>
}

// Look up by digest
cacache.get.byDigest(cachePath, 'deadbeef', {
  hashAlgorithm: 'sha512'
}).then(console.log)
// Output:
Buffer#<deadbeef>
```

Returns a Readable Stream of the cached data identified by key
.
If there is no content identified by key
, or if the locally-stored data does
not pass the validity checksum, an error will be emitted.
metadata
and digest
events will be emitted before the stream closes, if
you need to collect that extra data about the cached entry.
A sub-function, get.stream.byDigest
may be used for identical behavior,
except lookup will happen by content digest, bypassing the index entirely. This
version does not emit the metadata
and digest
events at all.

```javascript
// Look up by key
cacache.get.stream(
  cachePath, 'my-thing'
).on('metadata', metadata => {
  console.log('metadata:', metadata)
}).on('hashAlgorithm', algo => {
  console.log('hashAlgorithm:', algo)
}).on('digest', digest => {
  console.log('digest:', digest)
}).pipe(
  fs.createWriteStream('./x.tgz')
)
// Outputs:
metadata: { ... }
hashAlgorithm: 'sha512'
digest: deadbeef

// Look up by digest
cacache.get.stream.byDigest(
  cachePath, 'deadbeef', { hashAlgorithm: 'sha512' }
).pipe(
  fs.createWriteStream('./x.tgz')
)
```

Looks up key
in the cache index, returning information about the entry if
one exists.
- `key` - Key the entry was looked up under. Matches the `key` argument.
- `digest` - Content digest the entry refers to.
- `hashAlgorithm` - Hashing algorithm used to generate `digest`.
- `path` - Filesystem path relative to `cache` argument where content is stored.
- `time` - Timestamp the entry was first added on.
- `metadata` - User-assigned metadata associated with the entry/content.

```javascript
cacache.get.info(cachePath, 'my-thing').then(console.log)
// Output
{
key: 'my-thing',
digest: 'deadbeef',
path: '.testcache/content/deadbeef',
time: 12345698490,
metadata: {
name: 'blah',
version: '1.2.3',
description: 'this was once a package but now it is my-thing'
}
}
```

Inserts data passed to it into the cache. The returned Promise resolves with a
digest (generated according to opts.hashAlgorithm
) after the
cache entry has been successfully written.

```javascript
fetch(
  'https://registry.npmjs.org/cacache/-/cacache-1.0.0.tgz'
).then(res => {
  // assuming a node-fetch-style response, where `.buffer()` resolves with the body as a Buffer
  return res.buffer()
}).then(data => {
  return cacache.put(cachePath, 'registry.npmjs.org|cacache@1.0.0', data)
}).then(digest => {
  console.log('digest is', digest)
})
```

Returns a Writable
Stream that inserts
data written to it into the cache. Emits a digest
event with the digest of
written contents when it succeeds.

```javascript
request.get(
  'https://registry.npmjs.org/cacache/-/cacache-1.0.0.tgz'
).pipe(
  cacache.put.stream(
    cachePath, 'registry.npmjs.org|cacache@1.0.0'
  ).on('digest', d => console.log(`digest is ${d}`))
)
```

cacache.put
functions have a number of options in common.
Arbitrary metadata to be attached to the inserted key.
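For example, metadata attached at insertion time comes back on the entries returned by `get` and `get.info`. A minimal sketch, reusing `cachePath` from the example above and assuming the option is passed as `metadata`:

```javascript
cacache.put(cachePath, 'my-meta-key', 'some data', {
  metadata: { name: 'blah', version: '1.2.3' }
}).then(() => {
  // The same metadata is reported back by the index.
  return cacache.get.info(cachePath, 'my-meta-key')
}).then(info => {
  console.log(info.metadata.version) // '1.2.3'
})
```
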
If provided, the data stream will be verified to check that enough data was
passed through. If there's more or less data than expected, insertion will fail
with an EBADSIZE
error.
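A minimal sketch of how such a mismatch might surface; the option name `size` is an assumption here, implied by the `EBADSIZE` error rather than spelled out above:

```javascript
// 'hello world' is 11 bytes, so declaring a size of 5 should fail the insertion.
cacache.put(cachePath, 'sized-key', 'hello world', { size: 5 })
  .catch(err => {
    console.error(err.code) // 'EBADSIZE'
  })
```
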
If present, the pre-calculated digest for the inserted content. If this option is provided and does not match the post-insertion digest, insertion will fail with an `EBADCHECKSUM` error.
To control the hashing algorithm, use `opts.hashAlgorithm`.
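A sketch of the digest check, assuming the option is passed as `digest`; `tarballData` and `expectedSha512` are hypothetical stand-ins for content and a digest computed elsewhere (e.g. reported by a registry):

```javascript
cacache.put(cachePath, 'checked-key', tarballData, {
  digest: expectedSha512,  // hypothetical pre-calculated sha512 digest
  hashAlgorithm: 'sha512'
}).then(digest => {
  console.log('content matched the expected digest:', digest)
}).catch(err => {
  console.error(err.code)  // 'EBADCHECKSUM' when the digests disagree
})
```
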
Default: `'sha512'`
Hashing algorithm to use when calculating the digest for inserted data. Can use any algorithm listed in `crypto.getHashes()` or `'omakase'`/`'お任せします'` to pick a random hash algorithm on each insertion. You may also use any anagram of `'modnar'` to use this feature.
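For instance, the same data can be inserted under different algorithms, and each insertion resolves with a digest computed using the algorithm requested for it (a sketch reusing `cachePath` from the earlier example):

```javascript
cacache.put(cachePath, 'sha1-key', 'some data', { hashAlgorithm: 'sha1' })
  .then(digest => console.log('sha1 digest:', digest))

// No option given, so this insertion uses the 'sha512' default.
cacache.put(cachePath, 'sha512-key', 'some data')
  .then(digest => console.log('sha512 digest:', digest))
```
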
If provided, cacache will do its best to make sure any new files added to the cache use this particular `uid`/`gid` combination. This can be used, for example, to drop permissions when someone uses `sudo`, but cacache makes no assumptions about your needs here.
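For example, a tool running under `sudo` might hand ownership of new cache files back to the invoking user. A sketch only: `SUDO_UID`/`SUDO_GID` are set by `sudo` itself, and the key and data are placeholders:

```javascript
if (process.env.SUDO_UID && process.env.SUDO_GID) {
  cacache.put(cachePath, 'my-key', 'some data', {
    uid: parseInt(process.env.SUDO_UID, 10),
    gid: parseInt(process.env.SUDO_GID, 10)
  })
}
```
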
Default: null
If provided, cacache will memoize the given cache insertion in memory, bypassing any filesystem checks for that key or digest in future cache fetches. Nothing will be written to the in-memory cache unless this option is explicitly truthy.
There is no facility for limiting memory usage short of
cacache.clearMemoized()
, so be mindful of the sort of data
you ask to get memoized!
Reading from disk data can be forced by explicitly passing `memoize: false` to the reader functions, but their default will be to read from memory.
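A sketch of the round trip, assuming the write-side option is named `memoize` to mirror the `memoize: false` read flag above:

```javascript
cacache.put(cachePath, 'hot-key', 'frequently used data', { memoize: true })
  .then(() => cacache.get(cachePath, 'hot-key'))          // should be served from memory
  .then(res => console.log('memoized read:', res.data.toString()))
  // Explicitly skip the in-memory copy and hit the filesystem instead.
  .then(() => cacache.get(cachePath, 'hot-key', { memoize: false }))
  .then(res => console.log('disk read:', res.data.toString()))
```
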
Clears the entire cache. Mainly by blowing away the cache directory itself.

```javascript
cacache.rm.all(cachePath).then(() => {
  console.log('THE APOCALYPSE IS UPON US 😱')
})
```

Alias: cacache.rm
Removes the index entry for key
. Content will still be accessible if
requested directly by content address (get.stream.byDigest
).

```javascript
cacache.rm.entry(cachePath, 'my-thing').then(() => {
  console.log('I did not like it anyway')
})
```

Removes the content identified by digest
. Any index entries referring to it
will not be usable again until the content is re-added to the cache with an
identical digest.

```javascript
cacache.rm.content(cachePath, 'deadbeef').then(() => {
  console.log('data for my-thing is gone!')
})
```

Completely resets the in-memory entry cache.
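A sketch of when you might reach for it after memoizing an insertion; `bigBuffer` is a hypothetical stand-in for some large piece of data:

```javascript
cacache.put(cachePath, 'big-key', bigBuffer, { memoize: true }).then(() => {
  // Later, when keeping the data in memory is no longer worth it:
  cacache.clearMemoized()
  // Reads of 'big-key' now go back to the on-disk cache.
})
```
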
Returns a unique temporary directory inside the cache's tmp
dir. This
directory will use the same safe user assignment that all the other stuff uses.
Once the directory is made, it's the user's responsibility that all files within
are made according to the same opts.gid
/opts.uid
settings that would be
passed in. If not, you can ask cacache to do it for you by calling
tmp.fix()
, which will fix all tmp directory permissions.
If you want automatic cleanup of this directory, use `tmp.withTmp()`.

```javascript
cacache.tmp.mkdir(cache).then(dir => {
  fs.writeFile(path.join(dir, 'blablabla'), Buffer#<1234>, ...)
})
```

Creates a temporary directory with tmp.mkdir()
and calls cb
with it. The created temporary directory will be removed when the return value
of cb()
resolves -- that is, if you return a Promise from cb()
, the tmp
directory will be automatically deleted once that promise completes.
The same caveats apply when it comes to managing permissions for the tmp dir's contents.

```javascript
cacache.tmp.withTmp(cache, dir => {
  // `fs.writeFileAsync` assumes a promisified fs (e.g. via Bluebird's promisifyAll),
  // since the callback must return a Promise for cleanup to wait on.
  return fs.writeFileAsync(path.join(dir, 'blablabla'), Buffer#<1234>, ...)
}).then(() => {
  // `dir` no longer exists
})
```

Checks out and fixes up your cache:
- Cleans up corrupted or invalid index entries.
- Custom entry filtering options.
- Garbage collects any content entries not referenced by the index.
- Checks digests for all content entries and removes invalid content.
- Fixes cache ownership.
- Removes the `tmp` directory in the cache and all its contents.
When it's done, it'll return an object with various stats about the verification process, including amount of storage reclaimed, number of valid entries, number of entries removed, etc.
- `opts.uid` - uid to assign to cache and its contents
- `opts.gid` - gid to assign to cache and its contents
- `opts.filter` - receives a formatted entry. Return false to remove it. Note: might be called more than once on the same entry.

```javascript
// In a shell: echo somegarbage >> $CACHEPATH/content/deadbeef
cacache.verify(cachePath).then(stats => {
  // deadbeef collected, because of invalid checksum.
  console.log('cache is much nicer now! stats:', stats)
})
```

Returns a Date
representing the last time cacache.verify
was run on cache
.

```javascript
cacache.verify(cachePath).then(() => {
  cacache.verify.lastRun(cachePath).then(lastTime => {
    console.log('cacache.verify was last called on ' + lastTime)
  })
})
```