SuiteScript Example - Read File Line by Line
Created: November 24, 2020
Just as we can write to a file line by line, we can also read a file line by line:
/**
 * Custom module for executing N/file cookbook examples
 * @NApiVersion 2.0
 * @NModuleScope SameAccount
 */
define(['N/file'], function (f) {
  var exports = {}

  function readFile (response) {
    var weatherFile = f.load({ id: 'SuiteScripts/weather.csv' })
    var weatherData = []

    weatherFile.lines.iterator().each(function (line) {
      var w = line.value.split(',')
      weatherData.push({ date: w[0], low: w[1], high: w[2] })
      return true
    })

    response.write({ output: JSON.stringify(weatherData) })
  }

  exports.readFile = readFile
  return exports
});
Load the File
var weatherFile = f.load({ id: 'SuiteScripts/weather.csv' })
Here we load the same CSV file we created in the previous example.
File.lines.iterator()
weatherFile.lines.iterator()
File instances provide us with a lines Iterator, which we can use to walk the lines of a File one by one.
Iterate and process each line
weatherFile.lines.iterator().each(function (line) {
  var w = line.value.split(',')
  weatherData.push({ date: w[0], low: w[1], high: w[2] })
  return true
})
The Iterator provides an each() method which will loop through the lines, passing each line to the callback function you provide.
var w = line.value.split(',')
The line passed in is an Object, and you can retrieve the contents of the line from its value property. Here we do a naive split() on all commas, as we assume none of our column values contain commas. We then reconstruct the Objects we used initially when we defined WeatherData and push() each Object onto the weatherData Array.
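As a plain-JavaScript sketch (runnable outside NetSuite), the per-line parsing step looks like this; the column layout (date, low, high) matches our example file, and parseWeatherLine is just an illustrative name:

```javascript
// Sketch of the per-line parsing step, extracted from the each() callback.
// Assumes a three-column line (date,low,high) with no commas inside values.
function parseWeatherLine (lineValue) {
  var w = lineValue.split(',')
  return { date: w[0], low: w[1], high: w[2] }
}

var record = parseWeatherLine('2020-11-24,28,41')
// record is { date: '2020-11-24', low: '28', high: '41' }
```

Note the values come back as strings; if you need real numbers for the temperatures, convert them explicitly with parseFloat() or similar.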
Your callback function can return false to stop or true to continue, similar to the way the each() iterator works on Search Results. Returning nothing is the same as returning false. If you find that your script is only processing the first line of the file, I can almost guarantee it's because you forgot to return true.
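As a mental model only (this is not NetSuite's actual implementation), the each() contract behaves roughly like this:

```javascript
// Simplified model of the each() contract: anything other than an explicit
// true from the callback halts iteration. This illustrates why a forgotten
// `return true` stops processing after the first line.
function each (lines, callback) {
  for (var i = 0; i < lines.length; i++) {
    if (callback({ value: lines[i] }) !== true) { break }
  }
}

var withReturn = []
each(['a', 'b', 'c'], function (line) {
  withReturn.push(line.value)
  return true
})
// withReturn is ['a', 'b', 'c']

var withoutReturn = []
each(['a', 'b', 'c'], function (line) {
  withoutReturn.push(line.value)
  // no return statement, so iteration stops after the first line
})
// withoutReturn is ['a']
```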
Note that the lines Iterator can only be used on Text or CSV file types, and each line can be no more than 10MB (still a ludicrously high limit).
Why Stream?
It might not be immediately obvious why you would read a file this way. Why not get the entire contents and get to work?
Files can be large; extremely large. Loading the entire contents of a huge file could exhaust your script's memory limit in one shot. Further, you are presumably going to do something interesting with each line of a CSV, and almost anything interesting uses governance. Trying to process a massive file all at once is almost guaranteed to run you into the governance limit for your script.
Instead, you can use this approach to process the file without pushing up against either of these limits, allowing you to stop and check your governance threshold, store your progress for next time, and react accordingly.
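In plain JavaScript, that checkpointing pattern might be sketched like so. Here hasBudget and startIndex are placeholders: in a real script, hasBudget would wrap a governance check (for example, comparing N/runtime's Script.getRemainingUsage() against a threshold), and startIndex would be loaded from wherever you stored your progress:

```javascript
// Sketch of resumable processing: work through lines until the budget check
// fails, then return the index to resume from on the next run.
function processLines (lines, startIndex, hasBudget, processLine) {
  var i = startIndex
  while (i < lines.length && hasBudget()) {
    processLine(lines[i])
    i++
  }
  return i // persist this; i === lines.length means the file is done
}

var seen = []
var budget = 2 // pretend we only have governance for two lines this run
var next = processLines(['a', 'b', 'c', 'd'], 0, function () {
  return budget-- > 0
}, function (line) { seen.push(line) })
// seen is ['a', 'b'] and next is 2; a later run would start at index 2
```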
You can also use this as a preparatory step to processing large files, chunking them out into smaller, more manageable file sizes.
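A chunking pass might look like the sketch below; each batch of lines could then be joined and written back out (for example, via N/file) as its own smaller file:

```javascript
// Sketch of splitting one file's lines into fixed-size batches for later,
// more manageable processing runs.
function chunkLines (lines, chunkSize) {
  var chunks = []
  for (var i = 0; i < lines.length; i += chunkSize) {
    chunks.push(lines.slice(i, i + chunkSize))
  }
  return chunks
}

var batches = chunkLines(['l1', 'l2', 'l3', 'l4', 'l5'], 2)
// batches is [['l1', 'l2'], ['l3', 'l4'], ['l5']]
```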
HTH
-EG