How to unit test a Python function that draws PDF graphics?
Question:
I’m writing a CAD application that outputs PDF files using the Cairo graphics library. A lot of the unit testing does not require actually generating the PDF files, such as computing the expected bounding boxes of the objects. However, I want to make sure that the generated PDF files “look” correct after I change the code. Is there an automated way to do this? How can I automate as much as possible? Do I need to visually inspect each generated PDF? How can I solve this problem without pulling my hair out?
Answers:
You could capture the PDF as a bitmap (or at least a losslessly compressed image), and then compare the image generated by each test with a reference image of what it’s supposed to look like. Any differences would be flagged as an error for the test.
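One way to automate this idea in a Python test suite is to rasterize each PDF page and compare the results byte for byte. The sketch below is illustrative, not part of the answer above: it assumes Poppler’s pdftoppm is installed and on the PATH, and the file names are placeholders.

```python
import hashlib
import subprocess


def render_page(pdf_path, png_prefix, page=1, dpi=72):
    """Rasterize one PDF page to PNG using Poppler's pdftoppm (must be installed)."""
    subprocess.run(
        ["pdftoppm", "-png", "-r", str(dpi),
         "-f", str(page), "-l", str(page),
         pdf_path, png_prefix],
        check=True,
    )


def files_match(path_a, path_b):
    """Compare two rendered images byte for byte via their SHA-256 digests."""
    def digest(path):
        with open(path, "rb") as fh:
            return hashlib.sha256(fh.read()).hexdigest()
    return digest(path_a) == digest(path_b)
```

A test would then render the freshly generated PDF and assert `files_match(new_png, reference_png)`; note that byte-exact comparison only works when the rasterizer version and settings are pinned.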
The first idea that pops into my head is to use a diff utility. These are generally used to compare the text of documents, but they might also compare the layout of the PDF. Using one, you can compare the expected output with the output supplied.
The first result Google gives me is this. Although it is commercial, there might be free/open-source alternatives.
(See also update below!)
I’m doing the same thing using a shell script on Linux that wraps
- ImageMagick’s compare command
- the pdftk utility
- Ghostscript (optionally)
(It would be rather easy to port this to a .bat batch file for DOS/Windows.)
I have a few reference PDFs created by my application which are "known good". Newly generated PDFs after code changes are compared to these reference PDFs. The comparison is done pixel by pixel and is saved as a new PDF. In this PDF, all unchanged pixels are painted in white, while all differing pixels are painted in red.
Here are the building blocks:
pdftk
Use this command to split multi-page PDF files into multiple single-page PDFs:
pdftk reference.pdf burst output somewhere/reference_page_%03d.pdf
pdftk comparison.pdf burst output somewhere/comparison_page_%03d.pdf
compare
Use this command to create a "diff" PDF page for each of the pages:
compare \
    -verbose \
    -debug coder -log "%u %m:%l %e" \
    somewhere/reference_page_001.pdf \
    somewhere/comparison_page_001.pdf \
    -compose src \
    somewhereelse/reference_diff_page_001.pdf
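If the whole workflow is driven from a Python test suite rather than a shell script, the same invocation can be assembled as an argv list and run via subprocess. This is a sketch under the assumption that ImageMagick is installed; the helper names are mine, not part of the original script.

```python
import subprocess


def diff_cmd(ref_page, cmp_page, out_page):
    """Build the ImageMagick compare invocation shown above as an argv list."""
    return [
        "compare",
        "-verbose",
        "-debug", "coder", "-log", "%u %m:%l %e",
        ref_page,
        cmp_page,
        "-compose", "src",
        out_page,
    ]


def diff_page(ref_page, cmp_page, out_page):
    """Run the per-page diff; requires ImageMagick's compare on the PATH."""
    subprocess.run(diff_cmd(ref_page, cmp_page, out_page), check=True)
```

Looping `diff_page` over the burst single-page files produced by pdftk covers a whole document.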
Ghostscript
Because of automatically inserted metadata (such as the current date and time), PDF output does not work well for MD5-hash-based file comparisons.
If you want to automatically discover all cases which consist of purely white pages, you could also convert to a metadata-free bitmap format using the bmp256 output device. You can do that for the original PDFs (reference and comparison), or for the diff-PDF pages:
gs \
    -o reference_diff_page_001.bmp \
    -r72 \
    -g595x842 \
    -sDEVICE=bmp256 \
    reference_diff_page_001.pdf

md5sum reference_diff_page_001.bmp
If the MD5sum is what you expect for an all-white page of 595×842 PostScript points, then your unit test passed.
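A unit test can hash the rendered bitmap and compare it against a baseline recorded once from a known-good run. Here is a minimal Python sketch of that check; the baseline constant is a placeholder you would fill in with your own recorded hash.

```python
import hashlib

# Baseline captured once from a known-good all-white page (placeholder value).
EXPECTED_WHITE_PAGE_MD5 = "replace-with-your-known-good-hash"


def md5_of(path):
    """MD5 of a file's contents, read in chunks to handle large bitmaps."""
    h = hashlib.md5()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()


def page_is_unchanged(bmp_path, expected=EXPECTED_WHITE_PAGE_MD5):
    """True when the rendered bitmap matches the recorded baseline hash."""
    return md5_of(bmp_path) == expected
```

Since the BMP carries no metadata, the hash is stable across runs as long as the Ghostscript version and render settings stay fixed.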
Update:
I don’t know why I didn’t previously think of generating a histogram output from the ImageMagick compare
…
The following is a command pipeline chaining two different commands:
- The first one is the same compare as above, which generates the ‘white pixels are equal, red pixels are differences’ output, only it produces ImageMagick’s internal miff format. It doesn’t write to a file, but to stdout.
- The second one uses convert to read stdin, generate a histogram, and output the result in text form. There will be two lines: one indicating the number of white pixels, the other indicating the number of red pixels.
Here it goes:
compare \
    reference.pdf \
    current.pdf \
    -compose src \
    miff:- \
| convert \
    - \
    -define histogram:unique-colors=true \
    -format %c \
    histogram:info:-
Sample output:
56934: (61937,0,7710,52428) #F1F100001E1ECCCC srgba(241,0,30,0.8)
444056: (65535,65535,65535,52428) #FFFFFFFFFFFFCCCC srgba(255,255,255,0.8)
(Sample output was generated by using these reference.pdf and current.pdf files.)
I think this type of output is really well suited for automatic unit testing. If you evaluate the two numbers, you can easily compute the "red pixel" percentage and you could even decide to return PASSED or FAILED based on a certain threshold (if you don’t necessarily need "zero red" for some reason).
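For example, a small Python helper could parse those two histogram lines and apply such a threshold. This is a sketch of my own, assuming the srgba(...) text format shown in the sample output; the function names are illustrative.

```python
import re


def red_fraction(histogram_text):
    """Parse ImageMagick histogram lines such as
    '56934: (...) #F1F1... srgba(241,0,30,0.8)' and return the
    fraction of pixels that are not pure white (i.e. differences)."""
    white = diff = 0
    for line in histogram_text.splitlines():
        m = re.match(r"\s*(\d+):.*srgba\((\d+),(\d+),(\d+),", line)
        if not m:
            continue
        count = int(m.group(1))
        rgb = tuple(int(g) for g in m.group(2, 3, 4))
        if rgb == (255, 255, 255):
            white += count
        else:
            diff += count
    total = white + diff
    return diff / total if total else 0.0


def pages_match(histogram_text, threshold=0.0):
    """PASS when the share of differing (red) pixels is within the threshold."""
    return red_fraction(histogram_text) <= threshold
```

With `threshold=0.0` the test demands pixel-perfect output; a small positive threshold tolerates minor rendering noise.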
I would try this using xpresser (https://wiki.ubuntu.com/Xpresser). You can try to match images to similar images, not only exact copies, which is the problem in these cases.
I don’t know if xpresser is being actively developed, or if it can be used with standalone image files (I think so). In any case, it takes its ideas from the Sikuli project (which is Java with a Jython front end, while xpresser is Python).
I wrote a tool in Python to validate PDFs for my employer’s documentation. It has the capability to compare individual pages to master images. I used a library I found called swftools to export the page to PNG, then used the Python Imaging Library to compare it with the master.
The relevant code looks something like this (this won’t run as there are some dependencies on other parts of the script, but you should get the idea):
# exporting (uses the gfx module from swftools)
import os
import gfx

gfxpdf = gfx.open("pdf", self.pdfpath)
if os.path.isfile(pngPath):
    os.remove(pngPath)
page = gfxpdf.getPage(pagenum)
img = gfx.ImageList()
img.startpage(page.width, page.height)
page.render(img)
img.endpage()
img.save(pngPath)
return os.path.isfile(pngPath)

# comparing (uses the Python Imaging Library)
from PIL import Image, ImageChops

outPng = os.path.join(outpath, pngname)
masterPng = os.path.join(outpath, "_master", pngname)
if os.path.isfile(masterPng):
    output = Image.open(outPng).convert("RGB")  # discard alpha channel, if any
    master = Image.open(masterPng).convert("RGB")
    # any non-zero channel maximum in the difference image means a mismatch
    mismatch = any(x[1] for x in ImageChops.difference(output, master).getextrema())
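The getextrema trick can be exercised in isolation with Pillow (the maintained PIL fork). The helper and the in-memory test images below are my own illustration, not part of the original tool.

```python
from PIL import Image, ImageChops


def images_mismatch(img_a, img_b):
    """True if any pixel differs: the per-band (min, max) extrema of the
    difference image are all (0, 0) only when the images are identical."""
    diff = ImageChops.difference(img_a.convert("RGB"), img_b.convert("RGB"))
    return any(band_max for _, band_max in diff.getextrema())


# Example: tiny in-memory images, two identical and one with a single changed pixel
a = Image.new("RGB", (4, 4), "white")
b = Image.new("RGB", (4, 4), "white")
c = Image.new("RGB", (4, 4), "white")
c.putpixel((1, 1), (255, 0, 0))  # one red pixel makes c differ from a
```

This makes a convenient assertion in a test: identical renders yield no mismatch, while a single changed pixel is caught.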
"cmppdf" compares either the visual appearance or text content of PDFs.
It is a bash script, downloadable from https://abhweb.org/jima/cmppdf?v
It uses pdftk
and compare
to graphically compare PDFs, similar to what others have described in other answers. Meta data (anything which does not change the actual appearance) is not compared.
The text-comparison option uses pdftotxt
and diff
.