Parse base64-encoded data URIs more efficiently (in some places) #10434
base: main
Conversation
Very long data: URIs in source documents are causing outsized memory usage due to various parsing inefficiencies, for instance in Network.URI, TagSoup, and T.P.R.Markdown.source. See e.g. #10075.

This change improves the situation in a couple of places we can control relatively easily by using an attoparsec text-specialized parser to consume base64-encoded strings. Attoparsec's takeWhile + inClass functions are designed to chew through long strings like this without doing unnecessary allocation, and the improvements in peak heap allocation are significant.

One of the observations here is that if something parses as a valid data: URI it shouldn't need any further escaping, so we can short-circuit various processing steps that would otherwise unpack or iterate over the characters in the URI.
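To illustrate that last observation (this is a hypothetical sketch, not part of the diff below): the short-circuit amounts to skipping the per-character escaping pass when the string has already been recognized as a data: URI. The helper name and the prefix check stand in for whatever the real patch uses.

import qualified Data.Text as T

-- Sketch only: 'escape' stands in for whatever escaping routine a reader or
-- writer would normally apply. A URI that has already parsed as a base64
-- data: URI contains only URI-safe characters, so it can be returned
-- untouched instead of being unpacked and re-escaped character by character.
-- (The prefix check here approximates "already parsed as a data: URI".)
escapeUnlessDataURI :: (T.Text -> T.Text) -> T.Text -> T.Text
escapeUnlessDataURI escape uri
  | "data:" `T.isPrefixOf` uri = uri
  | otherwise                  = escape uri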
-- Consume a run of base64 characters from the current input chunk with the
-- attoparsec parser below, then advance the parsec input and source position
-- by the amount consumed.
parseBase64String = do
  Sources ((pos, txt):rest) <- getInput
  let r = A.parse pBase64 txt
  case r of
    Done remaining consumed -> do
      let pos' = incSourceColumn pos (T.length consumed)
      setInput $ Sources ((pos', remaining):rest)
      return consumed
    _ -> mzero

pBase64 :: A.Parser Text
pBase64 = do
  -- base64 alphabet, followed by optional '=' padding
  most <- A.takeWhile1 (A.inClass "A-Za-z0-9+/")
  rest <- A.takeWhile (== '=')
  return $ most <> rest
Two thoughts on this:

1. Is attoparsec really necessary? Why not just Data.Text.takeWhile?
2. My experience is that parsers like this, which just manipulate the input directly using getInput and setInput, are problematic in parsec because parsec doesn't realize that input has been consumed. I've had to use a regular parsec parser somewhere in there to make it realize this. One option is just something like count charsConsumed anyChar, and then you don't need to compute the end position manually...
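For reference, a rough sketch of what that combination might look like, assuming pandoc's usual ParsecT-over-Sources setup; the name, signature, and character predicate are guesses rather than code from this PR:

import Control.Monad (mzero)
import Data.Char (isAlphaNum, isAscii)
import qualified Data.Text as T
import Text.Pandoc.Parsing

-- Scan the current chunk with plain Data.Text.takeWhile, then let parsec
-- itself consume the same number of characters (count n anyChar) so that its
-- consumed-input flag and source position stay correct without any manual
-- setInput/incSourceColumn bookkeeping.
parseBase64StringAlt :: Monad m => ParsecT Sources st m T.Text
parseBase64StringAlt = do
  Sources ((_, txt):_) <- getInput
  let isB64 c = (isAscii c && isAlphaNum c) || c `elem` ("+/=" :: String)
      consumed = T.takeWhile isB64 txt
  if T.null consumed
     then mzero
     else consumed <$ count (T.length consumed) anyChar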
1. In this spot I can possibly just borrow the fast inClass function from attoparsec and use it with regular text takeWhile; will have to fiddle with it.
2. Will investigate. I took it for granted that I would make the parsec state happy by fiddling with the input as seen here, but that was not based on deep understanding or rigorous analysis.
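That first idea could be as small as the following (illustrative only, not the eventual patch): attoparsec's inClass just builds a Char -> Bool membership test, so it composes directly with Data.Text.takeWhile and no attoparsec Parser is needed at all.

import qualified Data.Attoparsec.Text as A
import qualified Data.Text as T

-- Fast character-class predicate borrowed from attoparsec, reused with plain
-- Data.Text.takeWhile instead of running an attoparsec parser.
isBase64Char :: Char -> Bool
isBase64Char = A.inClass "A-Za-z0-9+/="

takeBase64 :: T.Text -> T.Text
takeBase64 = T.takeWhile isBase64Char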
It's probably fine to use attoparsec, but there might be a slight speedup if you can avoid it.
On 2: you could try putting this parser under many and see if parsec complains.
The code here is organized a little bit randomly and I expect it needs more work before it can go in.
Good improvements though. The peak heap allocation for html -> markdown for the example provided by @ebeigarts in #10075 goes down from ~2900 MB to ~2500 MB on my computer, and markdown -> json goes from 1577 MB to 73 MB! As discussed in the issue comments, HTML reading has, at the least, a TagSoup issue.