File handling in server-side JavaScript


Let's start with reading an entire file into memory in one go. fs.readFile() hands back the file's contents as a Buffer:

const fs = require('fs');

const filename = 'binary.bin';

fs.readFile(filename, (err, data) => {
  if (err) {
    console.error('Error reading file:', err);
    return;
  }
  console.log(data);

  // process the Buffer data using Buffer methods (e.g., slice, copy)
});
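That closing comment deserves a quick illustration. Inside the callback, the Buffer can be sliced and copied directly. Here's a hypothetical sketch (it assumes the file holds at least four bytes):

const header = data.subarray(0, 4); // a view of the first four bytes (the modern form of slice; no copy made)
console.log(header.toString('hex')); // print them as hex

const copied = Buffer.alloc(4);
header.copy(copied); // copy those bytes into a fresh Buffer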

Another aspect of dealing with files is streaming them in chunks, which becomes a necessity when dealing with large files. Here's a contrived example of writing out in streaming chunks:


const fs = require('fs');

const filename = 'large_file.txt';
const chunkSize = 1024 * 1024; // (1)
const content = 'This is some content to be written in chunks.'; // (2)
const fileSizeLimit = 5 * 1024 * 1024; // (3)

let writtenBytes = 0; // (4)

const writeStream = fs.createWriteStream(filename, { highWaterMark: chunkSize }); // (5)

function writeChunk() { // (6)
  const chunk = content.repeat(Math.ceil(chunkSize / content.length)); // (7)

  if (writtenBytes + chunk.length <= fileSizeLimit) { // (8)
    const canWriteMore = writeStream.write(chunk);
    writtenBytes += chunk.length; // (9)
    console.log(`Wrote chunk of size: ${chunk.length}, Total written: ${writtenBytes}`);

    if (!canWriteMore) {
      writeStream.once('drain', writeChunk);
    } else {
      writeChunk();
    }
  } else {
    console.error('File size limit reached'); // (10)
    writeStream.end();
  }
}

writeStream.on('error', (err) => { // (11)
  console.error('Error writing file:', err);
});

writeStream.on('finish', () => { // (11)
  console.log('Finished writing file');
});

writeChunk();

Streaming gives you more power, but you'll notice it involves more work. The work you're doing is in setting chunk sizes and then responding to events based on the chunks. This is the essence of avoiding putting too much of a huge file into memory at once. Instead, you break it into chunks and deal with each one. Here are my notes about the interesting parts of the write example above:

  1. We specify a chunk size in bytes. In this case, we have a 1MB chunk, which is how much content will be written at a time.
  2. Here's some fake content to write.
  3. Now we set a file size limit, in this case 5MB.
  4. This variable tracks how many bytes we've written (so we can stop writing after 5MB).
  5. We create the actual writeStream object. The highWaterMark option tells it how big the chunks are that it will accept.
  6. The writeChunk() function is recursive. Whenever a chunk needs to be handled, it calls itself. It does this until the file limit has been reached, at which point it exits.
  7. Here, we're just repeating the sample text until it reaches the 1MB size.
  8. Here's the interesting part. If the file size limit isn't exceeded, then we call writeStream.write(chunk):
    1. writeStream.write(chunk) returns false if the buffer size is exceeded. That means we can't fit more in the buffer given the size limit.
    2. When the buffer is exceeded, the drain event occurs, handled by the handler we define here with writeStream.once('drain', writeChunk);. Notice that this is a recursive callback to writeChunk. (The same backpressure dance is sketched with async/await right after this list.)
  9. This keeps track of how much we've written.
  10. This handles the case where we're done writing and finishes the stream writer with writeStream.end();.
  11. This demonstrates adding event handlers for the error and finish events.
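Since the drain-plus-recursion pattern in notes 8 and 9 can be hard to follow, here's a rough sketch of the same backpressure logic written with async/await instead of the recursive callback. It leans on once() from Node's built-in events module (which resolves a promise when the named event fires) and assumes the same filename, content, chunkSize, and fileSizeLimit values from the example above:

const fs = require('fs');
const { once } = require('events');

async function writeChunks() {
  const writeStream = fs.createWriteStream(filename, { highWaterMark: chunkSize });
  let writtenBytes = 0;

  while (true) {
    const chunk = content.repeat(Math.ceil(chunkSize / content.length));
    if (writtenBytes + chunk.length > fileSizeLimit) break; // respect the 5MB cap

    if (!writeStream.write(chunk)) {
      // write() returned false: the internal buffer is full,
      // so wait for it to flush before writing more
      await once(writeStream, 'drain');
    }
    writtenBytes += chunk.length;
  }

  writeStream.end();
}

writeChunks().catch((err) => console.error('Error writing file:', err));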

And to read it back off the disk, we can use a similar approach:
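A minimal sketch of that reading side might look like this, assuming we mirror the 1MB highWaterMark from the write example and simply tally up what arrives on each data event:

const fs = require('fs');

const filename = 'large_file.txt';
const chunkSize = 1024 * 1024; // read in 1MB chunks, mirroring the write side

const readStream = fs.createReadStream(filename, { highWaterMark: chunkSize });

let readBytes = 0;

readStream.on('data', (chunk) => {
  readBytes += chunk.length; // each chunk is a Buffer of up to 1MB
  console.log(`Read chunk of size: ${chunk.length}, Total read: ${readBytes}`);
});

readStream.on('error', (err) => {
  console.error('Error reading file:', err);
});

readStream.on('end', () => { // read streams emit 'end' once the file is exhausted
  console.log('Finished reading file');
});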
