Dual-track submissions to SIGGRAPH have been around for a few years now. During review deliberations it's generally assumed that being accepted as a journal publication is better than conference. Journal accepts are relieved of the page limit, and supposedly there are universities that bean-count journal publications as more valuable than conference papers (seriously, though? even as CVPR, ICLR, and NeurIPS approach or exceed the impact factors of Science and Nature?). Setting other reasons aside, I wondered if I could get Claude Code to gather data and test whether SIGGRAPH conference publications receive fewer (or, spoilers, more!) citations than journal publications.
In the blue block below, I've pasted the site that Claude Code made to summarize the data and statistics gathered by the code it wrote. The bottom contains a link to the per-paper data, and the scraping and analysis code is on GitHub. After the blue section, I have some closing reflections.
Before 2022, all SIGGRAPH technical papers were published as journal articles in TOG. Starting in 2022, accepted papers could appear either in a dedicated conference proceedings volume (conference track) or in the TOG issue that DBLP associates with that year's conference (journal track). The split applies to both SIGGRAPH North America (held ~August) and SIGGRAPH Asia (held ~December).
This creates a natural comparison: papers from the same review cycle, same venue, same year — but different publication routes. Any systematic citation difference may reflect visibility effects (open-access proceedings vs. journal), self-selection by authors, or differences in how the community discovers and cites each type.
Paper metadata was retrieved from DBLP for 2022–2025 using the DBLP search API. For each year, four sources were queried: the SIGGRAPH NA conference-track page, the corresponding TOG volume (issue 4), the SIGGRAPH Asia conference-track page, and the corresponding TOG volume (issue 6). Results were deduplicated on (venue, year, title). Non-paper records (editorship entries, proceedings metadata) were filtered out.
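The retrieval-and-dedup step can be sketched roughly as follows. This is a simplified illustration, not the actual scraping code (which is on GitHub): `fetch_dblp` queries DBLP's public publication search endpoint, and `dedupe` assumes flat records with `venue`/`year`/`title` keys (DBLP's real JSON nests these inside an `info` object).

```python
import json
import urllib.parse
import urllib.request

def fetch_dblp(query, max_hits=1000):
    """Query the DBLP publication search API; return the raw hit records."""
    url = ("https://dblp.org/search/publ/api?"
           + urllib.parse.urlencode({"q": query, "h": max_hits, "format": "json"}))
    with urllib.request.urlopen(url) as resp:
        data = json.load(resp)
    return data["result"]["hits"].get("hit", [])

def dedupe(records):
    """Deduplicate on (venue, year, title), keeping the first occurrence.
    Titles are normalized (case, surrounding whitespace) before comparison."""
    seen, out = set(), []
    for r in records:
        key = (r["venue"], r["year"], r["title"].strip().lower())
        if key not in seen:
            seen.add(key)
            out.append(r)
    return out
```

Deduplicating on (venue, year, title) is what collapses a paper that shows up both on the conference-track page and in the matching TOG volume query.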
Citation counts were fetched from the Semantic Scholar API in batches of 500 DOIs. Of 1904 papers, 1885 (99%) matched a Semantic Scholar record. Counts reflect citations indexed as of 2026-03-18.
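The batched lookup might look like this sketch, assuming Semantic Scholar's batch paper endpoint with `DOI:`-prefixed ids; the function names here are hypothetical and the real code is on GitHub. Unmatched DOIs come back as `null`, which is how the 19 papers without a Semantic Scholar record would drop out.

```python
import json
import urllib.request

BATCH_URL = ("https://api.semanticscholar.org/graph/v1/paper/batch"
             "?fields=citationCount")

def chunks(seq, size=500):
    """Split a list into consecutive slices of at most `size` items."""
    return [seq[i:i + size] for i in range(0, len(seq), size)]

def fetch_citations(dois):
    """Return {doi: citationCount} via the Semantic Scholar batch endpoint."""
    counts = {}
    for batch in chunks(dois, 500):
        payload = json.dumps({"ids": [f"DOI:{d}" for d in batch]}).encode()
        req = urllib.request.Request(
            BATCH_URL, data=payload,
            headers={"Content-Type": "application/json"})
        with urllib.request.urlopen(req) as resp:
            for doi, rec in zip(batch, json.load(resp)):
                if rec is not None:  # unmatched DOIs are returned as null
                    counts[doi] = rec["citationCount"]
    return counts
```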
Figure 1. Median citation count per group by approximate publication date. The downward trend toward recent years reflects recency, not declining quality.
Figure 2. Box plots of citation counts on a log scale. Boxes show interquartile range; whiskers extend to 1.5× IQR; points are outliers.
A non-parametric rank test comparing conference vs. journal citation distributions within each (venue, year) pair. By testing within-group, this controls for both recency and venue without model assumptions. Effect size is the rank-biserial correlation r: positive values mean conference papers rank higher (more citations). p-values are Benjamini-Hochberg FDR-corrected across all 8 tests.
| Venue | Year | n conf | n journal | Median conf | Median journal | Effect r | p (raw) | p (FDR) |
|---|---|---|---|---|---|---|---|---|
Significant results (p < 0.05) shown in red. Positive r = conf papers have higher citations.
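The description above reads like a Mann-Whitney U test with a rank-biserial effect size; here is a minimal sketch under that assumption, with r defined so that positive values mean the first group (conference) ranks higher, plus a hand-rolled Benjamini-Hochberg adjustment:

```python
import numpy as np
from scipy.stats import mannwhitneyu

def rank_test(conf, journal):
    """Two-sided Mann-Whitney U test plus rank-biserial effect size.
    r > 0 means the first group (conference) tends to have more citations."""
    u, p = mannwhitneyu(conf, journal, alternative="two-sided")
    r = 2.0 * u / (len(conf) * len(journal)) - 1.0  # rank-biserial correlation
    return r, p

def bh_fdr(pvals):
    """Benjamini-Hochberg adjusted p-values: adj p_(k) = min_{j>=k} p_(j)*n/j."""
    p = np.asarray(pvals, dtype=float)
    n = len(p)
    order = np.argsort(p)
    adj = np.empty(n)
    running_min = 1.0
    for rank_idx in range(n - 1, -1, -1):  # walk from largest p down
        i = order[rank_idx]
        running_min = min(running_min, p[i] * n / (rank_idx + 1))
        adj[i] = running_min
    return adj
```

Running `rank_test` once per (venue, year) pair and passing the eight raw p-values through `bh_fdr` reproduces the structure of the table above.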
A regression model that controls for recency and venue simultaneously across all papers:
log(c+1) = β₀ + β₁·is_conf + β₂·age_years + β₃·is_asia + ε.
age_years is computed from the approximate publication date (NA ≈ August 1, Asia ≈ December 1), capturing both the year-level recency trend and the ~4-month within-year offset between venues. The is_asia coefficient therefore reflects a residual venue effect after timing is fully absorbed.
| Predictor | Coef | SE | t | p | 95% CI |
|---|---|---|---|---|---|
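The regression can be sketched with plain NumPy least squares; this is a simplified stand-in for whatever stats package the analysis actually used, with variables named after the equation above:

```python
import numpy as np

def fit_citation_model(citations, is_conf, age_years, is_asia):
    """OLS fit of log(c+1) = b0 + b1*is_conf + b2*age_years + b3*is_asia.
    Returns (coefficients, standard errors), ordered as in the equation."""
    y = np.log1p(np.asarray(citations, dtype=float))
    X = np.column_stack([
        np.ones(len(y)),                   # intercept b0
        np.asarray(is_conf, dtype=float),  # 1 if conference track
        np.asarray(age_years, dtype=float),
        np.asarray(is_asia, dtype=float),  # 1 if SIGGRAPH Asia
    ])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / (len(y) - X.shape[1])  # residual variance
    se = np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X)))
    return beta, se
```

The log(c+1) transform keeps zero-citation papers in the model while taming the heavy right tail of citation counts.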
At submission, dual-track papers are limited to 7 pages of content (excluding references and up to two optional figures-only pages). Journal-track submissions have no page limit. Accepted dual-track papers appear in conference proceedings with roughly the same length; journal-accepted papers may expand freely. This creates a structural difference in paper length between the two tracks, which could independently affect citation counts — longer papers may present more complete work.
Figure 3. Distribution of page counts per group. Conference-track papers cluster around the 7-page submission limit (typically 8–12 pages including references and optional figures). Journal-track papers are substantially longer with no page cap.
Figure 4. Each point is one paper. Citations on a log scale. Spearman correlations (reported in table below) measure the monotonic relationship between page count and citations within each group.
| Group | n (with page data) | Spearman r | p |
|---|---|---|---|
Spearman r measures monotonic correlation between page count and citation count within each group, ignoring recency or venue differences.
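Per-group Spearman correlation is essentially a one-liner with SciPy; here is a toy sketch in which the group names and page/citation numbers are made up for illustration:

```python
from scipy.stats import spearmanr

def page_citation_correlation(groups):
    """groups maps a group name to (page_counts, citation_counts);
    returns {name: (spearman_rho, p_value)} computed within each group."""
    return {name: spearmanr(pages, cites)
            for name, (pages, cites) in groups.items()}

# Hypothetical data: perfectly monotone within each group.
example = {
    "conference": ([7, 8, 9, 10, 11], [3, 5, 8, 13, 21]),
    "journal":    ([14, 18, 22, 30], [2, 4, 9, 11]),
}
```

Because Spearman works on ranks, it is insensitive to the heavy tail of citation counts, but as noted it ignores recency and venue, which is why the rank test and regression above carry the main inferential weight.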
↓ Download papers-with-citations.csv (1904 papers, columns: conference, year, track, title, authors, doi, citations, …)
Hello, again. My main take on this is that this kind of meta study is now exceptionally easy with AI coding agents. In this case, the data collection didn't require much difficult labelling or filtering. The data is fairly easy to inspect for correctness.
As for the conclusions in this case, it sure seems like conference papers get more citations. This might be because conference papers have more computer-vision-adjacent content, or because the dual track attracts more exciting work in general. Either way, if we treat SIGGRAPH conference papers as a venue, that venue is headed toward higher citation impact than TOG. At the end of the day, citation impact is a major factor when choosing a publication venue. Accepting a dual-track paper as a journal publication, or submitting to the journal-only track, means associating the paper with a potentially lower citation-impact venue.
Running these armchair statistics was a fun distraction and a reason to get more familiar with Claude Code. There's a small part of me that thinks we waste too much effort drawing the line between journal and conference, so the findings encourage me to care even less about this distinction. I sort of wish we just had conference papers and let TOG do its thing separately, but I'm not sure I'd make a strong case for it.
Zooming out, conference versus journal seems like the least of our worries as peer review feels like it's falling apart at the seams. In my limited anecdotal experience, maybe 5–20% of SIGGRAPH submissions are heavily AI-generated (and not good), yet we waste many total human hours reviewing them or dancing around desk-rejecting them. Looking at reviews, it's closer to 20–50% AI-generated (ranging from not helpful to superficially helpful). Many SIGGRAPH committee members reported exceptional new difficulty recruiting volunteer tertiary reviewers (as vision and ML conferences have moved to required reviewing for submitters). Paper acceptance is an increasingly random signal.