<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/" version="3.8.7">Jekyll</generator><link href="http://cppalliance.org/feed.xml" rel="self" type="application/atom+xml" /><link href="http://cppalliance.org/" rel="alternate" type="text/html" /><updated>2026-05-10T08:02:05+00:00</updated><id>http://cppalliance.org/feed.xml</id><title type="html">The C++ Alliance</title><subtitle>The C++ Alliance is dedicated to helping the C++ programming language evolve. We see it developing as an ecosystem of open source libraries and as a growing community of those who contribute to those libraries.</subtitle><entry><title type="html">MrDocs in the Wild</title><link href="http://cppalliance.org/alan/2026/04/24/Alan.html" rel="alternate" type="text/html" title="MrDocs in the Wild" /><published>2026-04-24T00:00:00+00:00</published><updated>2026-04-24T00:00:00+00:00</updated><id>http://cppalliance.org/alan/2026/04/24/Alan</id><content type="html" xml:base="http://cppalliance.org/alan/2026/04/24/Alan.html">&lt;p&gt;The questions changed. For a long time, people asked about &lt;a href=&quot;https://www.mrdocs.com&quot;&gt;MrDocs&lt;/a&gt; in the abstract: what formats will it support, how will it handle templates, when will it be ready. Then, gradually, the questions became specific. &lt;a href=&quot;https://github.com/jll63&quot;&gt;Jean-Louis Leroy&lt;/a&gt;, the author of &lt;strong&gt;&lt;a href=&quot;https://github.com/boostorg/openmethod&quot;&gt;Boost.OpenMethod&lt;/a&gt;&lt;/strong&gt;, became one of our most active sources of feedback. His library exercises corners of C++ that most projects never touch, which means MrDocs gets tested in ways we would not have anticipated. He wanted to know why his template specializations were not sorted correctly. He wanted &lt;strong&gt;macro support&lt;/strong&gt; because Boost libraries rely heavily on macros. 
He hit a &lt;strong&gt;crash&lt;/strong&gt; when his doc comments contained HTML tables. These are not theoretical questions about a tool that might exist someday. These are questions from someone who already generated documentation with MrDocs and needs it to work better.&lt;/p&gt;

&lt;p&gt;In our &lt;a href=&quot;/alan/2025/10/28/Alan.html&quot;&gt;previous post&lt;/a&gt;, we described MrDocs transitioning from prototype to product. This post is about what happened when MrDocs went into the wild.&lt;/p&gt;

&lt;!-- prettier-ignore --&gt;
&lt;ul id=&quot;markdown-toc&quot;&gt;
  &lt;li&gt;&lt;a href=&quot;#real-projects-real-problems&quot; id=&quot;markdown-toc-real-projects-real-problems&quot;&gt;Real Projects, Real Problems&lt;/a&gt;    &lt;ul&gt;
      &lt;li&gt;&lt;a href=&quot;#the-demo-page&quot; id=&quot;markdown-toc-the-demo-page&quot;&gt;The Demo Page&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#breadcrumbs-without-a-navigation-file&quot; id=&quot;markdown-toc-breadcrumbs-without-a-navigation-file&quot;&gt;Breadcrumbs Without a Navigation File&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#coordinating-two-independent-extensions&quot; id=&quot;markdown-toc-coordinating-two-independent-extensions&quot;&gt;Coordinating Two Independent Extensions&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#edge-cases-in-the-wild&quot; id=&quot;markdown-toc-edge-cases-in-the-wild&quot;&gt;Edge Cases in the Wild&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#rendering-and-output&quot; id=&quot;markdown-toc-rendering-and-output&quot;&gt;Rendering and Output&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#under-the-hood&quot; id=&quot;markdown-toc-under-the-hood&quot;&gt;Under the Hood&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#the-mrdocs-website&quot; id=&quot;markdown-toc-the-mrdocs-website&quot;&gt;The MrDocs Website&lt;/a&gt;&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#exploring-the-unknowns&quot; id=&quot;markdown-toc-exploring-the-unknowns&quot;&gt;Exploring the Unknowns&lt;/a&gt;    &lt;ul&gt;
      &lt;li&gt;&lt;a href=&quot;#reflection-replacing-boilerplate-with-introspection&quot; id=&quot;markdown-toc-reflection-replacing-boilerplate-with-introspection&quot;&gt;Reflection: Replacing Boilerplate with Introspection&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#first-steps-toward-extensions&quot; id=&quot;markdown-toc-first-steps-toward-extensions&quot;&gt;First Steps Toward Extensions&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#why-we-discarded-mrdocs-as-compiler&quot; id=&quot;markdown-toc-why-we-discarded-mrdocs-as-compiler&quot;&gt;Why We Discarded MrDocs-as-Compiler&lt;/a&gt;&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#contributor-experience&quot; id=&quot;markdown-toc-contributor-experience&quot;&gt;Contributor Experience&lt;/a&gt;    &lt;ul&gt;
      &lt;li&gt;&lt;a href=&quot;#automating-pr-reviews&quot; id=&quot;markdown-toc-automating-pr-reviews&quot;&gt;Automating PR Reviews&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#ci-infrastructure&quot; id=&quot;markdown-toc-ci-infrastructure&quot;&gt;CI Infrastructure&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#test-infrastructure&quot; id=&quot;markdown-toc-test-infrastructure&quot;&gt;Test Infrastructure&lt;/a&gt;&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#acknowledgments-and-reflections&quot; id=&quot;markdown-toc-acknowledgments-and-reflections&quot;&gt;Acknowledgments and Reflections&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h1 id=&quot;real-projects-real-problems&quot;&gt;Real Projects, Real Problems&lt;/h1&gt;

&lt;div class=&quot;mermaid&quot;&gt;
%%{init: {&quot;theme&quot;: &quot;base&quot;, &quot;themeVariables&quot;: {&quot;primaryColor&quot;: &quot;#e4eee8&quot;, &quot;primaryBorderColor&quot;: &quot;#affbd6&quot;, &quot;primaryTextColor&quot;: &quot;#000000&quot;, &quot;lineColor&quot;: &quot;#baf9d9&quot;, &quot;secondaryColor&quot;: &quot;#f0eae4&quot;, &quot;tertiaryColor&quot;: &quot;#ebeaf4&quot;, &quot;fontSize&quot;: &quot;14px&quot;}}}%%
mindmap
  root((Feedback))
    First impressions
      Unstyled demos
      Custom stylesheets
    Navigation
      Orphaned pages
      Breadcrumbs
    AST edge cases
      Parameter packs
      Friend targets
      Detail namespaces
    Rendering
      Description ordering
      Code blocks
      Anchor links
    Runtime
      JS engine switch
      Compiler fallback
&lt;/div&gt;

&lt;h2 id=&quot;the-demo-page&quot;&gt;The Demo Page&lt;/h2&gt;

&lt;p&gt;Right after the &lt;a href=&quot;/alan/2025/10/28/Alan.html&quot;&gt;previous post&lt;/a&gt;, where we announced the MVP and encouraged people to try MrDocs, we noticed the &lt;strong&gt;&lt;a href=&quot;https://www.mrdocs.com/demos&quot;&gt;demos page&lt;/a&gt;&lt;/strong&gt; was not doing us any favors. Someone shared MrDocs in a developer community and the website started getting traffic. The landing page looked polished, but visitors clicked through to the demos and saw raw, unstyled HTML: no fonts, no spacing, no colors. The HTML generator produced correct semantic markup, and that is technically the point: users are supposed to customize the output with their own stylesheets. But on the demos page, there was no stylesheet at all, and the result looked broken rather than customizable.&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/pull/1122&quot;&gt;custom stylesheet system&lt;/a&gt;&lt;/strong&gt; added five configuration options (&lt;code&gt;stylesheets&lt;/code&gt;, &lt;code&gt;linkcss&lt;/code&gt;, &lt;code&gt;copycss&lt;/code&gt;, &lt;code&gt;no-default-styles&lt;/code&gt;, &lt;code&gt;stylesdir&lt;/code&gt;) so projects can match their own branding. A bundled default CSS now ships with MrDocs, and it was &lt;a href=&quot;https://github.com/cppalliance/mrdocs/pull/1101&quot;&gt;refined&lt;/a&gt; to remove gradients in favor of solid, readable backgrounds.&lt;/p&gt;
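&lt;p&gt;As a sketch, the five options might combine in a project’s configuration like this. The values and comments here are hypothetical; only the option names come from the pull request above, so consult the MrDocs configuration reference for exact spellings and defaults:&lt;/p&gt;

```yaml
# Hypothetical sketch of a project's MrDocs configuration (values illustrative)
stylesheets:              # project CSS applied on top of the default
  - docs/branding.css
linkcss: true             # reference stylesheets from the pages instead of inlining them
copycss: true             # copy the stylesheet files into the output directory
no-default-styles: false  # keep the bundled default CSS as a base
stylesdir: styles         # output subdirectory where the CSS files land
```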

&lt;details&gt;
  &lt;summary&gt;Stylesheet commits&lt;/summary&gt;

  &lt;ul&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/5fe30c1&quot;&gt;5fe30c1&lt;/a&gt; feat: custom stylesheets&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/33d985c&quot;&gt;33d985c&lt;/a&gt; chore: version is 0.8.0&lt;/li&gt;
  &lt;/ul&gt;

&lt;/details&gt;

&lt;h2 id=&quot;breadcrumbs-without-a-navigation-file&quot;&gt;Breadcrumbs Without a Navigation File&lt;/h2&gt;

&lt;p&gt;MrDocs generates &lt;strong&gt;thousands of reference pages&lt;/strong&gt;, one per C++ symbol. We maintain an &lt;a href=&quot;https://antora.org/&quot;&gt;Antora&lt;/a&gt; extension, the &lt;strong&gt;&lt;a href=&quot;https://github.com/cppalliance/antora-cpp-reference-extension&quot;&gt;antora-cpp-reference-extension&lt;/a&gt;&lt;/strong&gt;, that integrates these pages into Antora-based documentation sites. But the generated pages end up orphaned from the navigation tree. Users found the navigation confusing: clicking on “boost” in the breadcrumb did not go where expected, and reference pages had no trail showing where they belonged in the hierarchy.&lt;/p&gt;

&lt;p&gt;The obvious fix would be to list every page in Antora’s &lt;code&gt;nav.adoc&lt;/code&gt;, but maintaining a navigation file with thousands of entries that changes every time a symbol is added or removed is not practical. Worse, Antora renders the navigation file in the &lt;strong&gt;sidebar&lt;/strong&gt;, so listing every reference page would flood the UI with thousands of entries. We discussed the problem extensively with the &lt;a href=&quot;https://antora.org/&quot;&gt;Antora&lt;/a&gt; maintainer on the &lt;a href=&quot;https://antora.zulipchat.com/&quot;&gt;Antora community chat&lt;/a&gt;. His position was clear: Antora was designed so that pages must be in the navigation file. Programmatic editing of navigation is not supported.&lt;/p&gt;

&lt;p&gt;That was not acceptable to us. We needed breadcrumbs that work for thousands of generated pages without polluting the sidebar or requiring a hand-maintained navigation file. The Antora author’s position was reasonable from his perspective (Antora is a general-purpose documentation tool, not a reference generator), but our use case was fundamentally different from what Antora was designed for.&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;&lt;a href=&quot;https://github.com/cppalliance/antora-cpp-reference-extension&quot;&gt;antora-cpp-reference-extension&lt;/a&gt;&lt;/strong&gt; now &lt;a href=&quot;https://github.com/cppalliance/antora-cpp-reference-extension/commit/ae95eb2&quot;&gt;builds breadcrumbs independently&lt;/a&gt; from the navigation file. MrDocs generates reference pages in a directory structure that mirrors the C++ namespace hierarchy (&lt;code&gt;boost/urls/segments_view.adoc&lt;/code&gt; lives inside &lt;code&gt;boost/urls/&lt;/code&gt;). The extension uses this structure to reconstruct the breadcrumb trail: each directory maps to a namespace, and the page title (which is the symbol name) becomes the last breadcrumb entry. The result reads naturally: &lt;strong&gt;Reference &amp;gt; boost &amp;gt; urls &amp;gt; segments_view&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Zero changes to the nav file. The sidebar stays clean. Breadcrumbs appear automatically and update when symbols are added or removed.&lt;/p&gt;
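&lt;p&gt;The reconstruction amounts to a path walk. Here is a minimal sketch of the idea in C++ (the real extension is JavaScript running inside Antora; the function name and details are illustrative):&lt;/p&gt;

```cpp
#include <cassert>
#include <string>
#include <vector>

// Derive a breadcrumb trail purely from a generated page's relative path:
// each directory name maps to a namespace, and the file stem (the symbol
// name) becomes the final entry. No navigation file is consulted.
std::vector<std::string> breadcrumbs(const std::string& relPath)
{
    std::vector<std::string> trail{"Reference"};
    std::string part;
    for (char c : relPath)
    {
        if (c == '/') { trail.push_back(part); part.clear(); }
        else          { part += c; }
    }
    // Strip the ".adoc" extension to recover the symbol name.
    std::string::size_type dot = part.rfind('.');
    trail.push_back(dot == std::string::npos ? part : part.substr(0, dot));
    return trail;
}
```

For `boost/urls/segments_view.adoc` this yields the trail shown above: Reference &gt; boost &gt; urls &gt; segments_view.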

&lt;details&gt;
  &lt;summary&gt;Breadcrumb and reference extension commits&lt;/summary&gt;

  &lt;p&gt;&lt;strong&gt;&lt;a href=&quot;https://github.com/cppalliance/antora-cpp-reference-extension&quot;&gt;antora-cpp-reference-extension&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
  &lt;ul&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/antora-cpp-reference-extension/commit/ae95eb2&quot;&gt;ae95eb2&lt;/a&gt; feat: synthesize reference breadcrumbs without nav files&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/antora-cpp-reference-extension/commit/10a4019&quot;&gt;10a4019&lt;/a&gt; feat: add auto base URL detection&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/antora-cpp-reference-extension/commit/6a6c08b&quot;&gt;6a6c08b&lt;/a&gt; docs: auto-base-url option&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/antora-cpp-reference-extension/commit/4f7c79f&quot;&gt;4f7c79f&lt;/a&gt; refactor: enhance release asset validation&lt;/li&gt;
  &lt;/ul&gt;

&lt;/details&gt;

&lt;h2 id=&quot;coordinating-two-independent-extensions&quot;&gt;Coordinating Two Independent Extensions&lt;/h2&gt;

&lt;p&gt;The &lt;strong&gt;&lt;a href=&quot;https://github.com/cppalliance/antora-cpp-reference-extension&quot;&gt;antora-cpp-reference-extension&lt;/a&gt;&lt;/strong&gt; generates reference pages and breadcrumbs. The &lt;strong&gt;&lt;a href=&quot;https://github.com/cppalliance/antora-cpp-tagfiles-extension&quot;&gt;antora-cpp-tagfiles-extension&lt;/a&gt;&lt;/strong&gt; resolves cross-library symbol links (so a reference to &lt;code&gt;boost::system::error_code&lt;/code&gt; in Boost.URL’s docs links to the correct page in Boost.System’s docs). These are &lt;strong&gt;two independent Antora extensions&lt;/strong&gt; running as separate jobs.&lt;/p&gt;

&lt;p&gt;The problem was that the reference extension generates &lt;strong&gt;tagfiles&lt;/strong&gt; as a side effect of producing reference pages, and the tagfiles extension needs the &lt;strong&gt;most recent version&lt;/strong&gt; of those tagfiles to resolve links correctly. MrDocs changes the tagfiles every time the corpus changes. Manually keeping them in sync was not sustainable: committing tagfiles to the repository meant they were always stale by the time the next build ran.&lt;/p&gt;

&lt;p&gt;We made the extensions &lt;a href=&quot;https://github.com/cppalliance/antora-cpp-reference-extension/commit/6e8ffcb&quot;&gt;coordinate directly&lt;/a&gt;. The reference extension now hands its tagfile to the tagfiles extension at build time, so the links always reflect the current state of the documentation. The reference extension also gained &lt;strong&gt;&lt;a href=&quot;https://github.com/cppalliance/antora-cpp-reference-extension/commit/10a4019&quot;&gt;auto base URL detection&lt;/a&gt;&lt;/strong&gt;, removing the need for manual path configuration when switching between development and production builds.&lt;/p&gt;

&lt;details&gt;
  &lt;summary&gt;Extension coordination commits&lt;/summary&gt;

  &lt;p&gt;&lt;strong&gt;&lt;a href=&quot;https://github.com/cppalliance/antora-cpp-reference-extension&quot;&gt;antora-cpp-reference-extension&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
  &lt;ul&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/antora-cpp-reference-extension/commit/6e8ffcb&quot;&gt;6e8ffcb&lt;/a&gt; feat: antora-cpp-tagfiles-extension coordination&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/antora-cpp-reference-extension/commit/8f12576&quot;&gt;8f12576&lt;/a&gt; chore: version is 0.1.0&lt;/li&gt;
  &lt;/ul&gt;

  &lt;p&gt;&lt;strong&gt;&lt;a href=&quot;https://github.com/cppalliance/antora-cpp-tagfiles-extension&quot;&gt;antora-cpp-tagfiles-extension&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
  &lt;ul&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/antora-cpp-tagfiles-extension/commit/98eba40&quot;&gt;98eba40&lt;/a&gt; feat: antora-cpp-reference-extension coordination&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/antora-cpp-tagfiles-extension/commit/5a1723c&quot;&gt;5a1723c&lt;/a&gt; feat: add global log level control for missing symbols&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/antora-cpp-tagfiles-extension/commit/453f01b&quot;&gt;453f01b&lt;/a&gt; chore: version is 0.1.0&lt;/li&gt;
  &lt;/ul&gt;

&lt;/details&gt;

&lt;h2 id=&quot;edge-cases-in-the-wild&quot;&gt;Edge Cases in the Wild&lt;/h2&gt;

&lt;p&gt;As more libraries adopted MrDocs, edge cases in C++ symbol extraction surfaced. &lt;a href=&quot;https://github.com/boostorg/beast&quot;&gt;Boost.Beast&lt;/a&gt; exposed a &lt;strong&gt;duplicate ellipsis&lt;/strong&gt; in parameter pack rendering (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/issues/1108&quot;&gt;#1108&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/issues/1129&quot;&gt;#1129&lt;/a&gt;):&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Before:&lt;/strong&gt; &lt;code&gt;T&amp;amp; emplace(Args...&amp;amp;&amp;amp;... args)&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;After:&lt;/strong&gt; &lt;code&gt;T&amp;amp; emplace(Args&amp;amp;&amp;amp;... args)&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://github.com/boostorg/openmethod&quot;&gt;Boost.OpenMethod&lt;/a&gt; revealed that &lt;strong&gt;friend targets&lt;/strong&gt; were not resolving correctly. &lt;a href=&quot;https://github.com/cppalliance/buffers&quot;&gt;Boost.Buffers&lt;/a&gt; uncovered a problem with &lt;strong&gt;detail namespaces&lt;/strong&gt;: when a class inherits from a base in a hidden namespace, the inherited members appeared in the documentation but their doc comments were lost (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/issues/1107&quot;&gt;#1107&lt;/a&gt;). We &lt;a href=&quot;https://github.com/cppalliance/mrdocs/pull/1109&quot;&gt;fixed this&lt;/a&gt; so derived classes inherit documentation from hidden bases.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Unnamed structs&lt;/strong&gt; also sparked an extended design discussion. When C++ code declares &lt;code&gt;constexpr struct {} f{};&lt;/code&gt;, MrDocs needs a &lt;strong&gt;stable, unique name&lt;/strong&gt; for hyperlinks. The team established a collaborative design process using shared documents, with &lt;a href=&quot;https://github.com/pdimov&quot;&gt;Peter Dimov&lt;/a&gt; contributing an insight about C compatibility (&lt;code&gt;typedef struct {} T;&lt;/code&gt; makes the struct named in C++).&lt;/p&gt;
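&lt;p&gt;The distinction is easy to see in code (an illustrative snippet, not from MrDocs itself):&lt;/p&gt;

```cpp
#include <type_traits>

// Both declarations are valid C++. MrDocs only needs to synthesize a stable
// page name for the second case: the first struct is unnamed, but the typedef
// name T denotes it -- the C-compatible idiom from the design discussion.
typedef struct { int x; } T;

// A truly unnamed struct: only the variable f has a name, so a hyperlink
// target for the type itself must be invented.
constexpr struct {} f{};

static_assert(std::is_class_v<T>, "T denotes the unnamed struct");
```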

&lt;details&gt;
  &lt;summary&gt;AST and metadata commits&lt;/summary&gt;

  &lt;ul&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/c85be75&quot;&gt;c85be75&lt;/a&gt; fix: remove duplicate ellipsis in parameter pack expansion&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/c3dbded&quot;&gt;c3dbded&lt;/a&gt; fix(ast): prevent TU parent from including unmatched globals&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/76b7b43&quot;&gt;76b7b43&lt;/a&gt; fix(ast): canonicalize friend targets&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/05f5852&quot;&gt;05f5852&lt;/a&gt; fix(metadata): copy impl-defined base docs&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/35cf1f6&quot;&gt;35cf1f6&lt;/a&gt; fix: UsingSymbol is SymbolParent&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/c406d57&quot;&gt;c406d57&lt;/a&gt; fix: preserve extraction mode when copying members from derived classes&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/4e7ef04&quot;&gt;4e7ef04&lt;/a&gt; fix: prevent infinite recursion when extracting non-regular base class&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/0a69301&quot;&gt;0a69301&lt;/a&gt; fix: extract and fix some special member function helpers&lt;/li&gt;
  &lt;/ul&gt;

&lt;/details&gt;

&lt;h2 id=&quot;rendering-and-output&quot;&gt;Rendering and Output&lt;/h2&gt;

&lt;p&gt;Users noticed that the &lt;strong&gt;manual description&lt;/strong&gt; of a symbol was buried below long member tables (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/issues/1105&quot;&gt;#1105&lt;/a&gt;). On a class with many members, you had to scroll past the entire member listing before finding the author’s explanation of what the class does. We moved the description to appear &lt;strong&gt;immediately after the synopsis&lt;/strong&gt;, matching what &lt;a href=&quot;https://en.cppreference.com/&quot;&gt;cppreference&lt;/a&gt; does.&lt;/p&gt;

&lt;p&gt;Other rendering issues included HTML code blocks not wrapped in &lt;code&gt;&amp;lt;pre&amp;gt;&lt;/code&gt; tags, &lt;strong&gt;anchor links&lt;/strong&gt; appearing when the wrapper element was missing, and the Handlebars template engine accumulating &lt;strong&gt;special name re-mappings&lt;/strong&gt; that conflated different symbols.&lt;/p&gt;

&lt;details&gt;
  &lt;summary&gt;Rendering and output commits&lt;/summary&gt;

  &lt;ul&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/d90eae6&quot;&gt;d90eae6&lt;/a&gt; fix: hide anchor links when wrapper is not included&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/92491de&quot;&gt;92491de&lt;/a&gt; fix: manual description comes before member lists&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/58bf524&quot;&gt;58bf524&lt;/a&gt; fix: remove all special name re-mappings for Handlebars&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/2a75692&quot;&gt;2a75692&lt;/a&gt; fix: HTML code blocks not wrapped in pre tags&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/1ebff32&quot;&gt;1ebff32&lt;/a&gt; fix: bottomUpTraverse() skips ListBlock items&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/7b118b1&quot;&gt;7b118b1&lt;/a&gt; fix: missing @return command in doc comment&lt;/li&gt;
  &lt;/ul&gt;

&lt;/details&gt;

&lt;h2 id=&quot;under-the-hood&quot;&gt;Under the Hood&lt;/h2&gt;

&lt;p&gt;We fixed a &lt;strong&gt;compiler fallback&lt;/strong&gt; issue where MrDocs failed when the compilation database referenced a compiler that was not available on the current machine, and corrected &lt;strong&gt;sanitizer flag propagation&lt;/strong&gt; so that UBSan and TSan do not unnecessarily propagate to dependency builds.&lt;/p&gt;

&lt;details&gt;
  &lt;summary&gt;Build and toolchain commits&lt;/summary&gt;

  &lt;ul&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/235f5c8&quot;&gt;235f5c8&lt;/a&gt; fix: fall back to system compilers when database compiler is unavailable&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/f320581&quot;&gt;f320581&lt;/a&gt; fix: don’t pass sanitizer to dependency builds for UBSan/TSan&lt;/li&gt;
  &lt;/ul&gt;

&lt;/details&gt;

&lt;h2 id=&quot;the-mrdocs-website&quot;&gt;The MrDocs Website&lt;/h2&gt;

&lt;p&gt;While we were fixing the generated output, &lt;strong&gt;&lt;a href=&quot;https://github.com/rbbeeston&quot;&gt;Robert Beeston&lt;/a&gt;&lt;/strong&gt; and &lt;strong&gt;&lt;a href=&quot;https://github.com/julioest&quot;&gt;Julio Estrada&lt;/a&gt;&lt;/strong&gt; were redesigning the &lt;a href=&quot;https://www.mrdocs.com&quot;&gt;MrDocs website&lt;/a&gt;. Robert led the design direction, working with a team to develop a visual identity that balances a distinctive retro aesthetic with modern readability, including a dark theme. Julio handled the implementation: &lt;a href=&quot;https://github.com/cppalliance/mrdocs/pull/1032&quot;&gt;mobile-responsive layout&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/pull/1050&quot;&gt;UI styling improvements&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/86ce271&quot;&gt;cleaner backgrounds and styles&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/pull/1075&quot;&gt;Open Graph and Twitter meta tags&lt;/a&gt; for social sharing, and a &lt;a href=&quot;https://github.com/cppalliance/mrdocs/pull/1033&quot;&gt;close button for the docs navigation&lt;/a&gt; on smaller screens.&lt;/p&gt;

&lt;p&gt;For a documentation tool, the website is the first thing potential users see. Having a polished, memorable landing page matters more than it might for other kinds of projects.&lt;/p&gt;

&lt;h1 id=&quot;exploring-the-unknowns&quot;&gt;Exploring the Unknowns&lt;/h1&gt;

&lt;p&gt;The team made a deliberate choice: instead of following a &lt;strong&gt;traditional feature roadmap&lt;/strong&gt;, we would focus on &lt;strong&gt;areas of uncertainty&lt;/strong&gt; (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/issues/1113&quot;&gt;#1113&lt;/a&gt;). These were &lt;strong&gt;open questions&lt;/strong&gt; that blocked multiple design decisions at once:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;MrDocs-as-compiler&lt;/strong&gt; (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/issues/1073&quot;&gt;#1073&lt;/a&gt;): should MrDocs emit “object” files for later “linking,” like a compiler?&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Scripting extensions&lt;/strong&gt; (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/issues/1128&quot;&gt;#1128&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/issues/881&quot;&gt;#881&lt;/a&gt;): how should users extend and transform documentation output?&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Plugins&lt;/strong&gt; (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/issues/58&quot;&gt;#58&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/issues/1044&quot;&gt;#1044&lt;/a&gt;): how should third-party code register new generators?&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;JSON-only MrDocs&lt;/strong&gt;: should we add a JSON output format alongside (or replacing) the existing XML structured output?&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Reflection&lt;/strong&gt; (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/issues/1114&quot;&gt;#1114&lt;/a&gt;): how do we reduce the maintenance burden of the growing metadata model?&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Cross-linking&lt;/strong&gt; (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/issues/1072&quot;&gt;#1072&lt;/a&gt;): how do we reference symbols in other libraries?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The motivation was practical. Each Boost library that adopted MrDocs had its own needs that could not be met by the core tool alone. &lt;a href=&quot;https://github.com/boostorg/url&quot;&gt;Boost.URL&lt;/a&gt; has &lt;code&gt;implementation_defined&lt;/code&gt; namespaces with internal code that should be hidden or transformed in the documentation. &lt;a href=&quot;https://github.com/cppalliance/capy&quot;&gt;Boost.Capy&lt;/a&gt; has detail types that should be presented as user-facing types. Coroutines are represented as types in the AST but should be documented as functions. We want MrDocs to be smart enough, with project-specific extensions, that library authors do not have to resort to workarounds in the source code just to get the documentation right.&lt;/p&gt;

&lt;p&gt;Rather than hard-coding solutions for each library, the unknowns framework asked: what general mechanisms would let every library solve its own documentation problems?&lt;/p&gt;

&lt;div class=&quot;mermaid&quot;&gt;
%%{init: {&quot;theme&quot;: &quot;base&quot;, &quot;themeVariables&quot;: {&quot;primaryColor&quot;: &quot;#f7f9ff&quot;, &quot;primaryBorderColor&quot;: &quot;#9aa7e8&quot;, &quot;primaryTextColor&quot;: &quot;#1f2a44&quot;, &quot;lineColor&quot;: &quot;#b4bef2&quot;, &quot;secondaryColor&quot;: &quot;#fbf8ff&quot;, &quot;tertiaryColor&quot;: &quot;#ffffff&quot;, &quot;fontSize&quot;: &quot;14px&quot;}}}%%
mindmap
  root((Unknowns))
    Scripting extensions
      JS helpers
      Lua
    Plugins
      Generator API
      DLL loading
    Reflection
      Boost.Describe
      MrDocs.Describe
    Cross-linking
      Tagfiles
      Antora coordination
    JSON-only MrDocs
    MrDocs-as-compiler
&lt;/div&gt;

&lt;h2 id=&quot;reflection-replacing-boilerplate-with-introspection&quot;&gt;Reflection: Replacing Boilerplate with Introspection&lt;/h2&gt;

&lt;p&gt;MrDocs models many kinds of C++ symbols: functions, classes, namespaces, enums, typedefs, concepts, and more. Each symbol type has metadata, and every piece of code that touches that metadata had to &lt;strong&gt;enumerate all fields by hand&lt;/strong&gt;. Adding a single field to a symbol type meant updating it in:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Schema files&lt;/strong&gt; that describe the metadata format&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Generators&lt;/strong&gt; (HTML, AsciiDoc, XML) that produce the output&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Templates&lt;/strong&gt; that render individual pages&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Operators&lt;/strong&gt; like comparison functions, merge functions (e.g., merging symbols from different translation units when only one is documented), and equality checks&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Documentation&lt;/strong&gt; describing the metadata&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;The code itself&lt;/strong&gt; that populates and transforms the metadata&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That is roughly &lt;strong&gt;ten to fifteen places&lt;/strong&gt; per field, and missing one caused CI failures that blocked everyone. This was one of the &lt;strong&gt;unknowns&lt;/strong&gt; we identified: how to reduce the maintenance burden as the data model grows. Worse, downstream users who had their own templates and extensions also had to learn about the new fields and update everything accordingly.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href=&quot;https://github.com/gennaroprota&quot;&gt;Gennaro Prota&lt;/a&gt;&lt;/strong&gt;, with his strong background in generic programming and metaprogramming, took ownership of the reflection problem. The work progressed through several stages:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/pull/1130&quot;&gt;Integrate Boost.Describe&lt;/a&gt; into the metadata system, replacing hand-written serialization functions&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/pull/1153&quot;&gt;Add &lt;code&gt;$meta.type&lt;/code&gt; and &lt;code&gt;$meta.bases&lt;/code&gt;&lt;/a&gt; to all DOM objects so templates can introspect the corpus&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/pull/1151&quot;&gt;Replace the XML generator&lt;/a&gt; with a reflection-based one (no more hand-maintained XML output)&lt;/li&gt;
  &lt;li&gt;Build a &lt;a href=&quot;https://github.com/cppalliance/mrdocs/pull/1171&quot;&gt;custom reflection system (MrDocs.Describe)&lt;/a&gt; tailored to our needs&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/pull/1177&quot;&gt;Replace per-type operators&lt;/a&gt; with a single generic template&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The result &lt;strong&gt;eliminated the manual fan-out entirely&lt;/strong&gt;: adding a new field to a symbol type no longer requires touching ten other files. The description drives everything, and the serialization, comparison, and merge logic derive from it automatically. &lt;a href=&quot;https://www.boost.org/doc/libs/release/libs/describe/&quot;&gt;Boost.Describe&lt;/a&gt; and &lt;a href=&quot;https://www.boost.org/doc/libs/release/libs/mp11/&quot;&gt;Boost.Mp11&lt;/a&gt; are private dependencies that do not appear in public headers.&lt;/p&gt;
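&lt;p&gt;The underlying principle can be sketched in plain standard C++, without Boost.Describe itself. This is an illustrative stand-in, not MrDocs code: each metadata type exposes a single list of its members, and generic operations are written once against that list.&lt;/p&gt;

```cpp
#include <cassert>
#include <string>
#include <tuple>

// A hand-rolled stand-in for what a reflection library provides: the struct
// describes its own members once, and every generic operation derives from
// that single description instead of re-enumerating the fields per type.
struct FunctionMetadata
{
    std::string name;
    bool isConst = false;

    // The single source of truth: adding a field here is the only change.
    static auto members()
    {
        return std::make_tuple(&FunctionMetadata::name,
                               &FunctionMetadata::isConst);
    }
};

// One generic operator covers every described type: fold a comparison over
// the tuple of pointers-to-members.
template <class T>
bool equal(const T& a, const T& b)
{
    return std::apply(
        [&](auto... ptrs) { return ((a.*ptrs == b.*ptrs) && ...); },
        T::members());
}
```

Any operation written this way (equality here; serialization and merging follow the same pattern) picks up a newly added field automatically.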

&lt;p&gt;Along the way, Gennaro also added &lt;strong&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/pull/1163&quot;&gt;function object support&lt;/a&gt;&lt;/strong&gt;, fixed &lt;strong&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/pull/1157&quot;&gt;Markdown inline formatting&lt;/a&gt;&lt;/strong&gt;, and addressed &lt;strong&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/pull/1173&quot;&gt;missing dependent array bounds&lt;/a&gt;&lt;/strong&gt;.&lt;/p&gt;

&lt;details&gt;
  &lt;summary&gt;Reflection and metadata commits&lt;/summary&gt;

  &lt;p&gt;&lt;strong&gt;Reflection (Gennaro Prota)&lt;/strong&gt;&lt;/p&gt;
  &lt;ul&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/d490880&quot;&gt;d490880&lt;/a&gt; refactor(metadata): integrate Boost.Describe&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/c4dd89a&quot;&gt;c4dd89a&lt;/a&gt; feat: add $meta.type and $meta.bases to all DOM objects&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/d4a64ef&quot;&gt;d4a64ef&lt;/a&gt; fix: replace the XML generator with a reflection-based one&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/6ce961f&quot;&gt;6ce961f&lt;/a&gt; refactor: add custom reflection facilities (MrDocs.Describe)&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/eb68494&quot;&gt;eb68494&lt;/a&gt; refactor: migrate all reflection consumers to MrDocs.Describe&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/8f5391b&quot;&gt;8f5391b&lt;/a&gt; refactor: replace per-type merge() one-liners with a single generic template&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/e749144&quot;&gt;e749144&lt;/a&gt; feat: make the reflection consumers public&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/1ed76ad&quot;&gt;1ed76ad&lt;/a&gt; refactor: replace most per-type tag_invoke overloads with a single generic template&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/0246935&quot;&gt;0246935&lt;/a&gt; refactor: replace per-type operator==() and operator&amp;lt;=&amp;gt;() with a single generic template&lt;/li&gt;
  &lt;/ul&gt;

  &lt;p&gt;&lt;strong&gt;Features and fixes (Gennaro Prota)&lt;/strong&gt;&lt;/p&gt;
  &lt;ul&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/93a5032&quot;&gt;93a5032&lt;/a&gt; feat: add function object support&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/f35ebcd&quot;&gt;f35ebcd&lt;/a&gt; fix: rendering of Markdown inline formatting and bullet lists&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/4ae305b&quot;&gt;4ae305b&lt;/a&gt; fix: missing dependent array bounds in the output&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/72fba40&quot;&gt;72fba40&lt;/a&gt; test: add golden tests for a partial class template specialization&lt;/li&gt;
  &lt;/ul&gt;

&lt;/details&gt;

&lt;blockquote&gt;
  &lt;p&gt;The reflection work is the foundation for everything that comes next: the extension system, the upcoming Lua scripting, and the metadata transformation pipeline.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2 id=&quot;first-steps-toward-extensions&quot;&gt;First Steps Toward Extensions&lt;/h2&gt;

&lt;p&gt;MrDocs supports two extension points: &lt;strong&gt;JavaScript&lt;/strong&gt; for Handlebars template helpers, and &lt;strong&gt;Lua&lt;/strong&gt; for more powerful scripting. The JavaScript engine had been &lt;a href=&quot;https://duktape.org/&quot;&gt;Duktape&lt;/a&gt;, but Duktape is no longer actively maintained and only supports ES5.1. We needed a replacement.&lt;/p&gt;

&lt;p&gt;We evaluated several alternatives (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/issues/881&quot;&gt;#881&lt;/a&gt;):&lt;/p&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt;Engine&lt;/th&gt;
      &lt;th&gt;JS Support&lt;/th&gt;
      &lt;th&gt;Windows/MSVC&lt;/th&gt;
      &lt;th&gt;Size&lt;/th&gt;
      &lt;th&gt;License&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;&lt;a href=&quot;https://github.com/bellard/quickjs&quot;&gt;QuickJS&lt;/a&gt;&lt;/td&gt;
      &lt;td&gt;ES2023&lt;/td&gt;
      &lt;td&gt;No (clang-cl only)&lt;/td&gt;
      &lt;td&gt;~370 KB&lt;/td&gt;
      &lt;td&gt;MIT&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;&lt;a href=&quot;https://github.com/lynx-family/primjs&quot;&gt;PrimJS&lt;/a&gt;&lt;/td&gt;
      &lt;td&gt;ES2019&lt;/td&gt;
      &lt;td&gt;No (POSIX only)&lt;/td&gt;
      &lt;td&gt;~370 KB&lt;/td&gt;
      &lt;td&gt;MIT&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;&lt;a href=&quot;https://jerryscript.net/&quot;&gt;JerryScript&lt;/a&gt;&lt;/td&gt;
      &lt;td&gt;ES5.1 + ES2022 subset&lt;/td&gt;
      &lt;td&gt;Yes&lt;/td&gt;
      &lt;td&gt;~200 KB&lt;/td&gt;
      &lt;td&gt;Apache 2.0&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;&lt;a href=&quot;https://github.com/Samsung/escargot&quot;&gt;Escargot&lt;/a&gt;&lt;/td&gt;
      &lt;td&gt;ES2025 subset&lt;/td&gt;
      &lt;td&gt;Yes&lt;/td&gt;
      &lt;td&gt;~400-500 KB&lt;/td&gt;
      &lt;td&gt;LGPL 2.1&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;&lt;a href=&quot;https://github.com/ArtifexSoftware/mujs&quot;&gt;MuJS&lt;/a&gt;&lt;/td&gt;
      &lt;td&gt;ES5.1&lt;/td&gt;
      &lt;td&gt;Yes&lt;/td&gt;
      &lt;td&gt;~200-300 KB&lt;/td&gt;
      &lt;td&gt;ISC&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;&lt;a href=&quot;https://github.com/Moddable-OpenSource/moddable&quot;&gt;Moddable XS&lt;/a&gt;&lt;/td&gt;
      &lt;td&gt;ES2025 (~99%)&lt;/td&gt;
      &lt;td&gt;Yes (via SDK)&lt;/td&gt;
      &lt;td&gt;~100-300 KB&lt;/td&gt;
      &lt;td&gt;Apache/GPL/LGPL&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;&lt;a href=&quot;https://github.com/cesanta/mjs&quot;&gt;mJS&lt;/a&gt;&lt;/td&gt;
      &lt;td&gt;Restricted ES6&lt;/td&gt;
      &lt;td&gt;Yes&lt;/td&gt;
      &lt;td&gt;~50-60 KB&lt;/td&gt;
      &lt;td&gt;GPL 2.0 / Commercial&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;&lt;a href=&quot;https://github.com/cesanta/elk&quot;&gt;Elk&lt;/a&gt;&lt;/td&gt;
      &lt;td&gt;Minimal ES6&lt;/td&gt;
      &lt;td&gt;Yes&lt;/td&gt;
      &lt;td&gt;~20-30 KB&lt;/td&gt;
      &lt;td&gt;GPL 2.0 / Commercial&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;We first experimented with &lt;strong&gt;QuickJS&lt;/strong&gt;, which had the best ES support. But it requires C11 features like &lt;code&gt;&amp;lt;stdatomic.h&amp;gt;&lt;/code&gt; and &lt;code&gt;__int128&lt;/code&gt; that plain MSVC does not support. On Windows, users would need Clang with the Visual Studio runtime. &lt;strong&gt;PrimJS&lt;/strong&gt; was POSIX-only. We settled on &lt;strong&gt;&lt;a href=&quot;https://jerryscript.net/&quot;&gt;JerryScript&lt;/a&gt;&lt;/strong&gt;: it supports Windows and MSVC natively, has a small footprint (~200 KB), and covers enough of ES2022 for template helpers. Unlike most alternatives in the table, JerryScript was designed from the ground up to be &lt;strong&gt;embedded&lt;/strong&gt; in other applications, which makes it more like &lt;a href=&quot;https://www.lua.org/&quot;&gt;Lua&lt;/a&gt; and less like engines that target browsers or standalone runtimes.&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/pull/1126&quot;&gt;JavaScript helpers extension&lt;/a&gt;&lt;/strong&gt; was a single commit but a large one: &lt;strong&gt;85 files changed, 4,287 insertions&lt;/strong&gt;. The work included:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Replacing Duktape with JerryScript&lt;/strong&gt; across the entire codebase, including build scripts, CMake recipes, and third-party patches&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Rewriting the C++ JavaScript bindings&lt;/strong&gt; (&lt;code&gt;JavaScript.hpp&lt;/code&gt; and &lt;code&gt;JavaScript.cpp&lt;/code&gt;) with shared context lifetime, safer value accessors, and clearer error messages&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;A layered addon system&lt;/strong&gt; where projects provide JavaScript helpers in a directory structure (&lt;code&gt;generator/common/helpers/&lt;/code&gt; for shared helpers, &lt;code&gt;generator/html/helpers/&lt;/code&gt; for format-specific ones). Multiple addon directories can be layered, so a project’s helpers override or extend the defaults.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Golden tests&lt;/strong&gt; for extension output (&lt;code&gt;js-helper/&lt;/code&gt;, &lt;code&gt;js-helper-layering/&lt;/code&gt;) to verify that helpers produce the expected documentation&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;1,335 lines of new JavaScript binding tests&lt;/strong&gt; covering the engine lifecycle, value conversion, error handling, and helper registration&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Combined with the &lt;strong&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/pull/1139&quot;&gt;public API for registering custom generators&lt;/a&gt;&lt;/strong&gt;, MrDocs now supports customization beyond templates. A library like &lt;a href=&quot;https://develop.capy.cpp.al/capy/reference/boost/capy.html&quot;&gt;Boost.Capy&lt;/a&gt; could ship an extension that transforms its coroutine types into function documentation, without any changes to MrDocs itself.&lt;/p&gt;

&lt;div class=&quot;mermaid&quot;&gt;
%%{init: {&quot;theme&quot;: &quot;base&quot;, &quot;themeVariables&quot;: {&quot;primaryColor&quot;: &quot;#f7f9ff&quot;, &quot;primaryBorderColor&quot;: &quot;#9aa7e8&quot;, &quot;primaryTextColor&quot;: &quot;#1f2a44&quot;, &quot;lineColor&quot;: &quot;#b4bef2&quot;, &quot;secondaryColor&quot;: &quot;#fbf8ff&quot;, &quot;tertiaryColor&quot;: &quot;#ffffff&quot;, &quot;fontSize&quot;: &quot;14px&quot;}}}%%
flowchart LR
    A[Clang AST] --&amp;gt; B[Extraction]
    B --&amp;gt; C[Corpus]
    C --&amp;gt; D[Transformation Extensions]
    D --&amp;gt; E[Handlebars Generators]
    E --&amp;gt; F[Documentation Templates]
    F --&amp;gt; H[HTML / AsciiDoc]
    F --&amp;gt; G[Template Extensions]
    G --&amp;gt; F
    D -.-&amp;gt; I[XML]
&lt;/div&gt;

&lt;p&gt;The vision for extensions has two layers:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Transformation extensions&lt;/strong&gt; operate on the corpus between extraction and generation. A library could transform its internal types into the documentation structure it wants. This layer is not yet implemented.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Template extensions&lt;/strong&gt; (JavaScript helpers) operate inside the Handlebars templates that produce HTML and AsciiDoc output. This is the layer we shipped.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Lua scripts&lt;/strong&gt; will eventually provide more powerful scripting across both layers, rather than forming a third layer of their own&lt;/li&gt;
&lt;/ul&gt;

&lt;details&gt;
  &lt;summary&gt;Extension and generator commits&lt;/summary&gt;

  &lt;ul&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/0f3ecb4&quot;&gt;0f3ecb4&lt;/a&gt; feat: javascript helpers extension (85 files, 4,287 insertions)&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/930a5ea&quot;&gt;930a5ea&lt;/a&gt; fix: jerry_port_context_free wrong signature causes silent corruption&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/8da0930&quot;&gt;8da0930&lt;/a&gt; feat(lib): public API for generator registration&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/788c1ba&quot;&gt;788c1ba&lt;/a&gt; feat(generators): tables for symbols have headers&lt;/li&gt;
  &lt;/ul&gt;

&lt;/details&gt;

&lt;h2 id=&quot;why-we-discarded-mrdocs-as-compiler&quot;&gt;Why We Discarded MrDocs-as-Compiler&lt;/h2&gt;

&lt;p&gt;One unknown we explored and &lt;strong&gt;deliberately discarded&lt;/strong&gt; was &lt;a href=&quot;https://github.com/cppalliance/mrdocs/issues/1073&quot;&gt;MrDocs-as-compiler&lt;/a&gt; (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/issues/1073&quot;&gt;#1073&lt;/a&gt;). The idea, proposed by &lt;a href=&quot;https://github.com/pdimov&quot;&gt;Peter Dimov&lt;/a&gt;, was to treat MrDocs like a compiler: emit “object” files per translation unit, then “link” them to produce the final reference. &lt;a href=&quot;https://cmake.org/&quot;&gt;CMake&lt;/a&gt; would invoke MrDocs as if it were &lt;a href=&quot;https://clang.llvm.org/&quot;&gt;Clang&lt;/a&gt;, with identical command-line options.&lt;/p&gt;

&lt;p&gt;We spent time studying tools that work this way: &lt;a href=&quot;https://clang.llvm.org/extra/clang-tidy/&quot;&gt;clang-tidy&lt;/a&gt;, &lt;a href=&quot;https://clang.llvm.org/extra/clang-doc/&quot;&gt;clang-doc&lt;/a&gt;, &lt;a href=&quot;https://include-what-you-use.org/&quot;&gt;include-what-you-use&lt;/a&gt;. We found that &lt;strong&gt;tricking CMake into treating MrDocs as a real compiler&lt;/strong&gt; is not trivial. Every tool that tries this approach ends up needing either a coordinator binary (reimplementing what MrDocs already has) or CMake helper scripts. Both add workflow steps rather than removing them.&lt;/p&gt;

&lt;p&gt;The experience from the Boost ecosystem reinforced this: no Boost project uses any of these compiler-like tools for static analysis, and the reason is complexity. People who find the compilation database workflow too involved are going to be even less inclined to adopt a tool that requires them to pretend to be a compiler. We decided to keep MrDocs as a &lt;strong&gt;single-step tool&lt;/strong&gt; that reads a compilation database and produces output, rather than splitting it into a multi-binary pipeline that would need its own coordination layer.&lt;/p&gt;

&lt;h1 id=&quot;contributor-experience&quot;&gt;Contributor Experience&lt;/h1&gt;

&lt;p&gt;As more people contributed to MrDocs, the gap between “clone the repo” and “submit a useful PR” needed closing. The biggest change was the &lt;strong&gt;&lt;a href=&quot;/alan/2026/04/15/Alan.html&quot;&gt;bootstrap script&lt;/a&gt;&lt;/strong&gt;, which reduced the entire build setup to a single &lt;code&gt;python bootstrap.py&lt;/code&gt; command (covered in a &lt;a href=&quot;/alan/2026/04/15/Alan.html&quot;&gt;separate post&lt;/a&gt;). Beyond the bootstrap, we &lt;strong&gt;split the contributor guide&lt;/strong&gt; into focused sections, added &lt;strong&gt;reference documentation for MrDocs comment syntax&lt;/strong&gt; (so contributors know what &lt;code&gt;@copydoc&lt;/code&gt;, &lt;code&gt;@see&lt;/code&gt;, and other commands do), and created a &lt;strong&gt;&lt;code&gt;run_all_tests&lt;/code&gt; script&lt;/strong&gt; that runs the full test suite locally without needing to understand the CMake test configuration.&lt;/p&gt;

&lt;details&gt;
  &lt;summary&gt;Onboarding commits&lt;/summary&gt;

  &lt;ul&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/b103cba&quot;&gt;b103cba&lt;/a&gt; docs(reference): mrdocs comments&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/9b7ec24&quot;&gt;9b7ec24&lt;/a&gt; feat(util): run_all_tests script&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/5902699&quot;&gt;5902699&lt;/a&gt; docs: update packages&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/302f0a6&quot;&gt;302f0a6&lt;/a&gt; docs: split contribute.adoc guide&lt;/li&gt;
  &lt;/ul&gt;

&lt;/details&gt;

&lt;h2 id=&quot;automating-pr-reviews&quot;&gt;Automating PR Reviews&lt;/h2&gt;

&lt;p&gt;MrDocs PRs tend to be &lt;strong&gt;large and hard to review&lt;/strong&gt;. A single PR might touch the AST visitor, the Handlebars templates, the Antora extension, the CI configuration, and hundreds of &lt;strong&gt;golden test files&lt;/strong&gt; (when an intentional change to the output format updates the expected output for every test case). We found ourselves making the same review comments over and over.&lt;/p&gt;

&lt;p&gt;We set up &lt;strong&gt;&lt;a href=&quot;https://danger.systems/js/&quot;&gt;Danger.js&lt;/a&gt;&lt;/strong&gt; to catch these patterns before human reviewers see the PR. The most important check is &lt;strong&gt;detecting when source code changes do not include corresponding tests&lt;/strong&gt;: if someone changes extraction logic but does not update the golden tests, or changes a template without updating the expected output, Danger flags it. Beyond that:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Categorizes&lt;/strong&gt; all file changes into scopes (source, tests, golden-tests, docs, CI, build, tooling) and generates a &lt;strong&gt;summary table&lt;/strong&gt; showing churn per scope&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Validates&lt;/strong&gt; commit messages against &lt;a href=&quot;https://www.conventionalcommits.org/&quot;&gt;Conventional Commits&lt;/a&gt; format&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Warns&lt;/strong&gt; when a single commit exceeds &lt;strong&gt;2,000 lines&lt;/strong&gt; of source churn (encouraging smaller, reviewable slices)&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Flags&lt;/strong&gt; mismatched commit types (e.g., a &lt;code&gt;feat:&lt;/code&gt; commit that only touches test files suggests &lt;code&gt;test:&lt;/code&gt; instead)&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Rejects&lt;/strong&gt; PR descriptions under 40 characters&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Ignores&lt;/strong&gt; the test check for refactor-only PRs where the tests are expected to remain unchanged&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Even when there are no warnings, the &lt;strong&gt;scope summary table&lt;/strong&gt; gives reviewers an immediate sense of what a large PR touches. On a PR that changes 500 lines of source and 3,000 lines of golden tests, the table makes it clear that the bulk of the diff is expected test output, not new logic.&lt;/p&gt;

&lt;details&gt;
  &lt;summary&gt;Danger.js commits&lt;/summary&gt;

  &lt;ul&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/6f5f6e9&quot;&gt;6f5f6e9&lt;/a&gt; ci: setup danger.js&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/5429b2e&quot;&gt;5429b2e&lt;/a&gt; ci(danger): align report table and add top-files summary&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/240921d&quot;&gt;240921d&lt;/a&gt; ci(danger): split PR target ci workflows&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/08c46b6&quot;&gt;08c46b6&lt;/a&gt; ci(danger): correct file delta calculation in reports&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/2cfd081&quot;&gt;2cfd081&lt;/a&gt; ci(danger): adjust large commit threshold&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/71845c8&quot;&gt;71845c8&lt;/a&gt; ci(danger): map root files into explicit scopes&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/17b0a57&quot;&gt;17b0a57&lt;/a&gt; ci(danger): ignore test check for refactor-only PRs&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/6481fd3&quot;&gt;6481fd3&lt;/a&gt; ci(danger): simplify CI naming&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/fd7d248&quot;&gt;fd7d248&lt;/a&gt; ci(danger): omit empty sections from report&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/7502961&quot;&gt;7502961&lt;/a&gt; ci(danger): categorize util/bootstrap as build scope&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/57e191e&quot;&gt;57e191e&lt;/a&gt; ci(danger): better markdown format&lt;/li&gt;
  &lt;/ul&gt;

&lt;/details&gt;

&lt;h2 id=&quot;ci-infrastructure&quot;&gt;CI Infrastructure&lt;/h2&gt;

&lt;p&gt;We integrated &lt;strong&gt;&lt;a href=&quot;https://codecov.io/&quot;&gt;Codecov&lt;/a&gt;&lt;/strong&gt; for tracking test coverage across PRs and switched from GCC to &lt;strong&gt;Clang for coverage&lt;/strong&gt; (more accurate AST-based measurement). CI speed was a recurring concern: we &lt;strong&gt;skipped remote documentation generation on PRs&lt;/strong&gt;, &lt;strong&gt;sped up release demos&lt;/strong&gt;, and &lt;strong&gt;skipped long tests&lt;/strong&gt; that were not catching new bugs. LLVM cache keys were &lt;strong&gt;unified&lt;/strong&gt; to avoid redundant builds, and CTest timeouts were increased for sanitizer jobs that run significantly slower. &lt;strong&gt;&lt;a href=&quot;https://github.com/mizvekov&quot;&gt;Matheus Izvekov&lt;/a&gt;&lt;/strong&gt; contributed the &lt;a href=&quot;https://github.com/cppalliance/mrdocs/pull/1144&quot;&gt;Clang coverage switch&lt;/a&gt;, fixed an &lt;a href=&quot;https://github.com/cppalliance/mrdocs/pull/1132&quot;&gt;infinite recursion in extraction&lt;/a&gt;, and moved the project to &lt;a href=&quot;https://github.com/cppalliance/mrdocs/pull/1077&quot;&gt;use system libs by default&lt;/a&gt;.&lt;/p&gt;

&lt;details&gt;
  &lt;summary&gt;CI infrastructure commits&lt;/summary&gt;

  &lt;ul&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/ed6b3bc&quot;&gt;ed6b3bc&lt;/a&gt; ci: add codecov configuration&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/5426a0a&quot;&gt;5426a0a&lt;/a&gt; ci: use clang for coverage&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/d629173&quot;&gt;d629173&lt;/a&gt; fix(ci): unify redundant LLVM cache keys&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/36a3b51&quot;&gt;36a3b51&lt;/a&gt; ci: update actions to v1.9.1&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/7b2103a&quot;&gt;7b2103a&lt;/a&gt; ci: increase CTest timeout for MSan jobs&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/086becc&quot;&gt;086becc&lt;/a&gt; ci: increase the ctest timeout to 9000&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/adb6821&quot;&gt;adb6821&lt;/a&gt; ci(cpp-matrix): remove the optimized-debug factor&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/9507a38&quot;&gt;9507a38&lt;/a&gt; ci: simplify CI workflow and upgrade cpp-actions to @develop&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/9a5bd3c&quot;&gt;9a5bd3c&lt;/a&gt; ci: skip remote documentation generation on PRs&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/637011f&quot;&gt;637011f&lt;/a&gt; ci: detect and report demo generation failures&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/084322d&quot;&gt;084322d&lt;/a&gt; ci: speed up release demos on PRs&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/471951d&quot;&gt;471951d&lt;/a&gt; ci: skip long tests to speed up CI&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/a5f160b&quot;&gt;a5f160b&lt;/a&gt; ci: increase test coverage for the new XML generator&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/b1fc43c&quot;&gt;b1fc43c&lt;/a&gt; ci: exclude Reflection.hpp from coverage&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/a1f9a82&quot;&gt;a1f9a82&lt;/a&gt; ci: accept any g++-14 version&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/c136a46&quot;&gt;c136a46&lt;/a&gt; ci(website): preserve roadmap directory during deployment&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/4763d86&quot;&gt;4763d86&lt;/a&gt; revert(ci): remove premature roadmap report step&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/3462996&quot;&gt;3462996&lt;/a&gt; ci: revert coverage changes&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/8b2c3e9&quot;&gt;8b2c3e9&lt;/a&gt; ci: align llvm-sanitizer-config with archive basename&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/fdff573&quot;&gt;fdff573&lt;/a&gt; ci: gitignore CI node_modules&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/757d446&quot;&gt;757d446&lt;/a&gt; fix(ci): update the fmt branch reference from master to main&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/a3366b0&quot;&gt;a3366b0&lt;/a&gt; fix(ci): name rolling release packages after the branch&lt;/li&gt;
  &lt;/ul&gt;

&lt;/details&gt;

&lt;h2 id=&quot;test-infrastructure&quot;&gt;Test Infrastructure&lt;/h2&gt;

&lt;p&gt;MrDocs uses &lt;strong&gt;golden tests&lt;/strong&gt;: the expected output for every test case is stored as a file, and the test runner compares the actual output against it. The most important change was adding &lt;strong&gt;multipage golden tests&lt;/strong&gt;. Previously, all golden tests were single-page, but many bugs only manifested in multi-page output (cross-references between pages, navigation links, index generation). We were missing these entirely because we had no way to test them. We also added &lt;strong&gt;output normalization&lt;/strong&gt; (so platform differences do not cause false failures) and &lt;strong&gt;regression categories&lt;/strong&gt; so tests can be grouped and run selectively. A &lt;strong&gt;&lt;code&gt;run_ci_with_act.py&lt;/code&gt;&lt;/strong&gt; script lets contributors run the full CI pipeline locally using &lt;a href=&quot;https://github.com/nektos/act&quot;&gt;act&lt;/a&gt;.&lt;/p&gt;

&lt;details&gt;
  &lt;summary&gt;Test infrastructure commits&lt;/summary&gt;

  &lt;ul&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/bf78b1b&quot;&gt;bf78b1b&lt;/a&gt; test: support multipage golden tests&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/d7ad1ce&quot;&gt;d7ad1ce&lt;/a&gt; test: output normalization&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/ccd7f71&quot;&gt;ccd7f71&lt;/a&gt; test: check int tests results in ctest&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/681b0cd&quot;&gt;681b0cd&lt;/a&gt; chore: assign categories to regression tests&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/9146125&quot;&gt;9146125&lt;/a&gt; test: cover additional paths in DocCommentFinalizer.cpp&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/8326417&quot;&gt;8326417&lt;/a&gt; test: run_ci_with_act.py script&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/5527e9c&quot;&gt;5527e9c&lt;/a&gt; test: testClang_stdCxx default is C++26&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/0dfdb02&quot;&gt;0dfdb02&lt;/a&gt; test: --bad is disabled by default&lt;/li&gt;
  &lt;/ul&gt;

&lt;/details&gt;

&lt;h1 id=&quot;acknowledgments-and-reflections&quot;&gt;Acknowledgments and Reflections&lt;/h1&gt;

&lt;p&gt;Going into the wild changed MrDocs. The edge cases, the customization requests, and the integration feedback shaped the direction more than any internal roadmap could.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href=&quot;https://github.com/gennaroprota&quot;&gt;Gennaro Prota&lt;/a&gt;&lt;/strong&gt; drove the reflection integration that reduces maintenance burden across the entire codebase. &lt;strong&gt;&lt;a href=&quot;https://github.com/mizvekov&quot;&gt;Matheus Izvekov&lt;/a&gt;&lt;/strong&gt; hardened CI with coverage, sanitizers, and warnings-as-errors, and migrated dependency management to the bootstrap script. &lt;strong&gt;&lt;a href=&quot;https://github.com/julioest&quot;&gt;Julio Estrada&lt;/a&gt;&lt;/strong&gt; and &lt;strong&gt;&lt;a href=&quot;https://github.com/rbbeeston&quot;&gt;Robert Beeston&lt;/a&gt;&lt;/strong&gt; delivered the polished public face of MrDocs. &lt;strong&gt;&lt;a href=&quot;https://github.com/K-ballo&quot;&gt;Agustín Bergé&lt;/a&gt;&lt;/strong&gt; contributed AST and metadata fixes including base member shadowing and alias SFINAE detection. &lt;strong&gt;&lt;a href=&quot;https://github.com/jll63&quot;&gt;Jean-Louis Leroy&lt;/a&gt;&lt;/strong&gt; provided detailed feedback from &lt;a href=&quot;https://github.com/boostorg/openmethod&quot;&gt;Boost.OpenMethod&lt;/a&gt; that drove multiple improvements.&lt;/p&gt;

&lt;p&gt;The most requested feature we have not solved yet is &lt;strong&gt;macro support&lt;/strong&gt; (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/issues/1127&quot;&gt;#1127&lt;/a&gt;). Macros are expanded before parsing and do not appear in the &lt;a href=&quot;https://en.wikipedia.org/wiki/Abstract_syntax_tree&quot;&gt;AST&lt;/a&gt;. Supporting them would require preprocessor-level integration with &lt;a href=&quot;https://clang.llvm.org/&quot;&gt;Clang&lt;/a&gt;. The work ahead also includes &lt;strong&gt;Lua scripting&lt;/strong&gt;, &lt;strong&gt;metadata transforms&lt;/strong&gt;, and &lt;strong&gt;deeper reflection&lt;/strong&gt;, all direct responses to what users told us they need.&lt;/p&gt;

&lt;p&gt;The biggest lesson from this period is that the problems worth solving are the ones users bring. We spent time on an unknowns framework to decide what to explore, but the most impactful work came from people who showed up with a broken demo page, a missing breadcrumb, or a duplicate ellipsis in their generated docs.&lt;/p&gt;

&lt;p&gt;The complete set of changes is available in the &lt;a href=&quot;https://github.com/cppalliance/mrdocs&quot;&gt;MrDocs repository&lt;/a&gt;.&lt;/p&gt;</content><author><name></name></author><category term="alan" /><summary type="html">The questions changed. For a long time, people asked about MrDocs in the abstract: what formats will it support, how will it handle templates, when will it be ready. Then, gradually, the questions became specific. Jean-Louis Leroy, the author of Boost.OpenMethod, became one of our most active sources of feedback. His library exercises corners of C++ that most projects never touch, which means MrDocs gets tested in ways we would not have anticipated. He wanted to know why his template specializations were not sorted correctly. He wanted macro support because Boost libraries rely heavily on macros. He hit a crash when his doc comments contained HTML tables. These are not theoretical questions about a tool that might exist someday. These are questions from someone who already generated documentation with MrDocs and needs it to work better. In our previous post, we described MrDocs transitioning from prototype to product. This post is about what happened when MrDocs went into the wild. 
Real Projects, Real Problems The Demo Page Breadcrumbs Without a Navigation File Coordinating Two Independent Extensions Edge Cases in the Wild Rendering and Output Under the Hood The MrDocs Website Exploring the Unknowns Reflection: Replacing Boilerplate with Introspection First Steps Toward Extensions Why We Discarded MrDocs-as-Compiler Contributor Experience Automating PR Reviews CI Infrastructure Test Infrastructure Acknowledgments and Reflections Real Projects, Real Problems %%{init: {&quot;theme&quot;: &quot;base&quot;, &quot;themeVariables&quot;: {&quot;primaryColor&quot;: &quot;#e4eee8&quot;, &quot;primaryBorderColor&quot;: &quot;#affbd6&quot;, &quot;primaryTextColor&quot;: &quot;#000000&quot;, &quot;lineColor&quot;: &quot;#baf9d9&quot;, &quot;secondaryColor&quot;: &quot;#f0eae4&quot;, &quot;tertiaryColor&quot;: &quot;#ebeaf4&quot;, &quot;fontSize&quot;: &quot;14px&quot;}}}%% mindmap root((Feedback)) First impressions Unstyled demos Custom stylesheets Navigation Orphaned pages Breadcrumbs AST edge cases Parameter packs Friend targets Detail namespaces Rendering Description ordering Code blocks Anchor links Runtime JS engine switch Compiler fallback The Demo Page Right after the previous post, where we announced the MVP and encouraged people to try MrDocs, we noticed the demos page was not doing us any favors. Someone shared MrDocs on a developer community and the website started getting traffic. The landing page looked polished, but visitors clicked through to the demos and saw raw, unstyled HTML: no fonts, no spacing, no colors. The HTML generator produced correct semantic markup, and that is technically the point: users are supposed to customize the output with their own stylesheets. But on the demos page, there was no stylesheet at all, and the result looked broken rather than customizable. The custom stylesheet system added five configuration options (stylesheets, linkcss, copycss, no-default-styles, stylesdir) so projects can match their own branding. 
A bundled default CSS now ships with MrDocs, and it was refined to remove gradients in favor of solid, readable backgrounds. Stylesheet commits 5fe30c1 feat: custom stylesheets 33d985c chore: version is 0.8.0 Breadcrumbs Without a Navigation File MrDocs generates thousands of reference pages, one per C++ symbol. We maintain an Antora extension, the antora-cpp-reference-extension, that integrates these pages into Antora-based documentation sites. But the generated pages end up orphaned from the navigation tree. Users found the navigation confusing: clicking on “boost” in the breadcrumb did not go where expected, and reference pages had no trail showing where they belonged in the hierarchy. The obvious fix would be to list every page in Antora’s nav.adoc, but maintaining a navigation file with thousands of entries that changes every time a symbol is added or removed is not practical. Worse, Antora renders the navigation file in the sidebar, so listing every reference page would flood the UI with thousands of entries. We discussed the problem extensively with the Antora maintainer on the Antora community chat. His position was clear: Antora was designed so that pages must be in the navigation file. Programmatic editing of navigation is not supported. That was not acceptable for us. We needed breadcrumbs that work for thousands of generated pages without polluting the sidebar or requiring a hand-maintained navigation file. The Antora author’s position was reasonable from his perspective (Antora is a general-purpose documentation tool, not a reference generator), but our use case was fundamentally different from what Antora was designed for. The antora-cpp-reference-extension now builds breadcrumbs independently from the navigation file. MrDocs generates reference pages in a directory structure that mirrors the C++ namespace hierarchy (boost/urls/segments_view.adoc lives inside boost/urls/). 
The extension uses this structure to reconstruct the breadcrumb trail: each directory maps to a namespace, and the page title (which is the symbol name) becomes the last breadcrumb entry. The result reads naturally: Reference &gt; boost &gt; urls &gt; segments_view. Zero changes to the nav file. The sidebar stays clean. Breadcrumbs appear automatically and update when symbols are added or removed.

Breadcrumb and reference extension commits (antora-cpp-reference-extension):
- ae95eb2 feat: synthesize reference breadcrumbs without nav files
- 10a4019 feat: add auto base URL detection
- 6a6c08b docs: auto-base-url option
- 4f7c79f refactor: enhance release asset validation
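The directory-to-breadcrumb mapping is simple enough to sketch. This is an illustrative standalone function, not the extension’s actual code (the real logic works against Antora’s content catalog rather than bare path strings):

```javascript
// Sketch: derive a breadcrumb trail from a generated page's path.
// The path mirrors the C++ namespace hierarchy, so each directory
// becomes one breadcrumb entry and the file name becomes the last one.
function breadcrumbsFor(pagePath, rootLabel) {
  // "boost/urls/segments_view.adoc" -> ["boost", "urls", "segments_view"]
  const parts = pagePath.replace(/\.adoc$/, "").split("/");
  return [rootLabel].concat(parts);
}

console.log(breadcrumbsFor("boost/urls/segments_view.adoc", "Reference").join(" > "));
// "Reference > boost > urls > segments_view"
```

Because the trail is derived from the path at build time, renaming or removing a symbol updates the breadcrumb automatically, with no navigation file involved.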
Extension coordination commits:
- antora-cpp-reference-extension:
  - 6e8ffcb feat: antora-cpp-tagfiles-extension coordination
  - 8f12576 chore: version is 0.1.0
- antora-cpp-tagfiles-extension:
  - 98eba40 feat: antora-cpp-reference-extension coordination
  - 5a1723c feat: add global log level control for missing symbols
  - 453f01b chore: version is 0.1.0

Edge Cases in the Wild

As more libraries adopted MrDocs, edge cases in C++ symbol extraction surfaced. Boost.Beast exposed a duplicate ellipsis in parameter pack rendering (#1108, #1129):
Before: T&amp; emplace(Args...&amp;&amp;... args)
After: T&amp; emplace(Args&amp;&amp;... args)
Boost.OpenMethod revealed that friend targets were not resolving correctly. Boost.Buffers uncovered a problem with detail namespaces: when a class inherits from a base in a hidden namespace, the inherited members appeared in the documentation but their doc comments were lost (#1107). We fixed this so derived classes inherit documentation from hidden bases. Unnamed structs also sparked an extended design discussion. When C++ code declares constexpr struct {} f{};, MrDocs needs a stable, unique name for hyperlinks. The team established a collaborative design process using shared documents, with Peter Dimov contributing an insight about C compatibility (typedef struct {} T; makes the struct named in C++).

AST and metadata commits:
- c85be75 fix: remove duplicate ellipsis in parameter pack expansion
- c3dbded fix(ast): prevent TU parent from including unmatched globals
- 76b7b43 fix(ast): canonicalize friend targets
- 05f5852 fix(metadata): copy impl-defined base docs
- 35cf1f6 fix: UsingSymbol is SymbolParent
- c406d57 fix: preserve extraction mode when copying members from derived classes
- 4e7ef04 fix: prevent infinite recursion when extracting non-regular base class
- 0a69301 fix: extract and fix some special member function helpers

Rendering and Output

Users noticed that the manual description of a symbol was buried below long member tables (#1105).
On a class with many members, you had to scroll past the entire member listing before finding the author’s explanation of what the class does. We moved the description to appear immediately after the synopsis, matching what cppreference does. Other rendering issues included HTML code blocks not wrapped in &lt;pre&gt; tags, anchor links appearing when the wrapper element was missing, and the Handlebars template engine accumulating special name re-mappings that conflated different symbols.

Rendering and output commits:
- d90eae6 fix: hide anchor links when wrapper is not included
- 92491de fix: manual description comes before member lists
- 58bf524 fix: remove all special name re-mappings for Handlebars
- 2a75692 fix: HTML code blocks not wrapped in pre tags
- 1ebff32 fix: bottomUpTraverse() skips ListBlock items
- 7b118b1 fix: missing @return command in doc comment

Under the Hood

We fixed a compiler fallback issue where MrDocs failed when the compilation database referenced a compiler that was not available on the current machine, and corrected sanitizer flag propagation so that UBSan and TSan flags do not unnecessarily propagate to dependency builds.

Build and toolchain commits:
- 235f5c8 fix: fall back to system compilers when database compiler is unavailable
- f320581 fix: don’t pass sanitizer to dependency builds for UBSan/TSan

The MrDocs Website

While we were fixing the generated output, Robert Beeston and Julio Estrada were redesigning the MrDocs website. Robert led the design direction, working with a team to develop a visual identity that balances a distinctive retro aesthetic with modern readability, including a dark theme. Julio handled the implementation: mobile-responsive layout, UI styling improvements, cleaner backgrounds and styles, Open Graph and Twitter meta tags for social sharing, and a close button for the docs navigation on smaller screens. For a documentation tool, the website is the first thing potential users see.
Having a polished, memorable landing page matters more than it might for other kinds of projects. Exploring the Unknowns The team made a deliberate choice: instead of following a traditional feature roadmap, we would focus on areas of uncertainty (#1113). These were open questions that blocked multiple design decisions at once: MrDocs-as-compiler (#1073): should MrDocs emit “object” files for later “linking,” like a compiler? Scripting extensions (#1128, #881): how should users extend and transform documentation output? Plugins (#58, #1044): how should third-party code register new generators? JSON-only MrDocs: should we add a JSON output format alongside (or replacing) the existing XML structured output? Reflection (#1114): how do we reduce the maintenance burden of the growing metadata model? Cross-linking (#1072): how do we reference symbols in other libraries? The motivation was practical. Each Boost library that adopted MrDocs had its own needs that could not be met by the core tool alone. Boost.URL has implementation_defined namespaces with internal code that should be hidden or transformed in the documentation. Boost.Capy has detail types that should be presented as user-facing types. Coroutines are represented as types in the AST but should be documented as functions. We want MrDocs to be smart enough, with project-specific extensions, that library authors do not have to do workarounds in the source code just to get the documentation right. Rather than hard-coding solutions for each library, the unknowns framework asked: what general mechanisms would let every library solve its own documentation problems? 
[Mindmap of the unknowns: scripting extensions (JS helpers, Lua); plugins (generator API, DLL loading); reflection (Boost.Describe, MrDocs.Describe); cross-linking (tagfiles, Antora coordination); JSON-only MrDocs; MrDocs-as-compiler.]

Reflection: Replacing Boilerplate with Introspection

MrDocs models many kinds of C++ symbols: functions, classes, namespaces, enums, typedefs, concepts, and more. Each symbol type has metadata, and every piece of code that touches that metadata had to enumerate all fields by hand. Adding a single field to a symbol type meant updating it in:
- Schema files that describe the metadata format
- Generators (HTML, AsciiDoc, XML) that produce the output
- Templates that render individual pages
- Operators like comparison functions, merge functions (e.g., merging symbols from different translation units when only one is documented), and equality checks
- Documentation describing the metadata
- The code itself that populates and transforms the metadata

That is roughly ten to fifteen places per field, and missing one caused CI failures that blocked everyone. This was one of the unknowns we identified: how to reduce the maintenance burden as the data model grows. Worse, downstream users who had their own templates and extensions also had to learn about the new fields and update everything accordingly. Gennaro Prota, with his strong background in generic programming and metaprogramming, took ownership of the reflection problem.
The work progressed through several stages:
- Integrate Boost.Describe into the metadata system, replacing hand-written serialization functions
- Add $meta.type and $meta.bases to all DOM objects so templates can introspect the corpus
- Replace the XML generator with a reflection-based one (no more hand-maintained XML output)
- Build a custom reflection system (MrDocs.Describe) tailored to our needs
- Replace per-type operators with a single generic template

The result eliminated the per-field fan-out entirely: adding a new field to a symbol type no longer requires touching ten other files. The description drives everything, and the serialization, comparison, and merge logic derive from it automatically. Boost.Describe and Boost.Mp11 are private dependencies that do not appear in public headers. Along the way, Gennaro also added function object support and fixed Markdown inline formatting and missing dependent array bounds.

Reflection and metadata commits:

Reflection (Gennaro Prota)
- d490880 refactor(metadata): integrate Boost.Describe
- c4dd89a feat: add $meta.type and $meta.bases to all DOM objects
- d4a64ef fix: replace the XML generator with a reflection-based one
- 6ce961f refactor: add custom reflection facilities (MrDocs.Describe)
- eb68494 refactor: migrate all reflection consumers to MrDocs.Describe
- 8f5391b refactor: replace per-type merge() one-liners with a single generic template
- e749144 feat: make the reflection consumers public
- 1ed76ad refactor: replace most per-type tag_invoke overloads with a single generic template
- 0246935 refactor: replace per-type operator==() and operator&lt;=&gt;() with a single generic template

Features and fixes (Gennaro Prota)
- 93a5032 feat: add function object support
- f35ebcd fix: rendering of Markdown inline formatting and bullet lists
- 4ae305b fix: missing dependent array bounds in the output
- 72fba40 test: add golden tests for a partial class template specialization

The reflection work is the foundation for everything that comes next: the
extension system, the upcoming Lua scripting, and the metadata transformation pipeline.

First Steps Toward Extensions

MrDocs supports two extension points: JavaScript for Handlebars template helpers, and Lua for more powerful scripting. The JavaScript engine had been Duktape, but Duktape is no longer actively maintained and only supports ES5.1. We needed a replacement. We evaluated several alternatives (#881):

Engine | JS Support | Windows/MSVC | Size | License
QuickJS | ES2023 | No (clang-cl only) | ~370 KB | MIT
PrimJS | ES2019 | No (POSIX only) | ~370 KB | MIT
JerryScript | ES5.1 + ES2022 subset | Yes | ~200 KB | Apache 2.0
Escargot | ES2025 subset | Yes | ~400-500 KB | LGPL 2.1
MuJS | ES5.1 | Yes | ~200-300 KB | ISC
Moddable XS | ES2025 (~99%) | Yes (via SDK) | ~100-300 KB | Apache/GPL/LGPL
mJS | Restricted ES6 | Yes | ~50-60 KB | GPL 2.0 / Commercial
Elk | Minimal ES6 | Yes | ~20-30 KB | GPL 2.0 / Commercial

We first experimented with QuickJS, which had the best ES support. But it requires C11 features like &lt;stdatomic.h&gt; and __int128 that plain MSVC does not support. On Windows, users would need Clang with the Visual Studio runtime. PrimJS was POSIX-only. We settled on JerryScript: it supports Windows and MSVC natively, has a small footprint (~200 KB), and covers enough of ES2022 for template helpers. Unlike most alternatives in the table, JerryScript was designed from the ground up to be embedded in other applications, which makes it more like Lua and less like engines that target browsers or standalone runtimes. The JavaScript helpers extension was a single commit but a large one: 85 files changed, 4,287 insertions.
The work included:
- Replacing Duktape with JerryScript across the entire codebase, including build scripts, CMake recipes, and third-party patches
- Rewriting the C++ JavaScript bindings (JavaScript.hpp and JavaScript.cpp) with shared context lifetime, safer value accessors, and clearer error messages
- A layered addon system where projects provide JavaScript helpers in a directory structure (generator/common/helpers/ for shared helpers, generator/html/helpers/ for format-specific ones); multiple addon directories can be layered, so a project’s helpers override or extend the defaults
- Golden tests for extension output (js-helper/, js-helper-layering/) to verify that helpers produce the expected documentation
- 1,335 lines of new JavaScript binding tests covering the engine lifecycle, value conversion, error handling, and helper registration

Combined with the public API for registering custom generators, MrDocs now supports customization beyond templates. A library like Boost.Capy could write an extension that transforms its coroutine types into function documentation, without any changes to MrDocs itself.

[Pipeline: Clang AST -> Extraction -> Corpus -> Transformation Extensions -> Handlebars Generators -> Documentation Templates -> HTML / AsciiDoc. Template Extensions plug back into the templates, and the corpus can also be emitted directly as XML.]

The vision for extensions has two layers: Transformation extensions operate on the corpus between extraction and generation.
A library could transform its internal types into the documentation structure it wants. This layer is not yet implemented. Template extensions (JavaScript helpers) operate inside the Handlebars templates that produce HTML and AsciiDoc output. This is the layer we shipped. Lua scripting, planned as the more powerful option, will eventually hook into both layers.

Extension and generator commits:
- 0f3ecb4 feat: javascript helpers extension (85 files, 4,287 insertions)
- 930a5ea fix: jerry_port_context_free wrong signature causes silent corruption
- 8da0930 feat(lib): public API for generator registration
- 788c1ba feat(generators): tables for symbols have headers

Why We Discarded MrDocs-as-Compiler

One unknown we explored and deliberately discarded was MrDocs-as-compiler (#1073). The idea, proposed by Peter Dimov, was to treat MrDocs like a compiler: emit “object” files per translation unit, then “link” them to produce the final reference. CMake would invoke MrDocs as if it were Clang, with identical command-line options. We spent time studying tools that work this way: clang-tidy, clang-doc, include-what-you-use. What we found is that tricking CMake into thinking MrDocs is a real compiler is not trivial. Every tool that tries this approach ends up needing either a coordinator binary (reimplementing what MrDocs already has) or CMake helper scripts. Both add workflows rather than simplifying them. The experience from the Boost ecosystem reinforced this: no Boost project uses any of these compiler-like tools for static analysis, and the reason is complexity. People who find the compilation database workflow too involved are going to be even less inclined to adopt a tool that requires them to pretend to be a compiler. We decided to keep MrDocs as a single-step tool that reads a compilation database and produces output, rather than splitting it into a multi-binary pipeline that would need its own coordination layer.
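To make the template-extension layer concrete, here is a hedged sketch of what a project-provided helper file might look like. The directory layout (generator/common/helpers/) comes from the description above, but the helper body and the CommonJS export convention are illustrative assumptions, not MrDocs’ documented helper API:

```javascript
// A project drops a file like this into generator/common/helpers/.
// A helper with the same name in a later addon layer would override it.
// Example helper: strip the namespace qualification from a symbol name
// so a template can render a compact heading.
function shortName(qualifiedName) {
  const parts = String(qualifiedName).split("::");
  return parts[parts.length - 1];
}

module.exports = { shortName };
```

A template could then call the helper where it previously had to spell out the string manipulation inline, and a downstream project could replace it without touching MrDocs itself.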
Contributor Experience As more people contributed to MrDocs, the gap between “clone the repo” and “submit a useful PR” needed closing. The biggest change was the bootstrap script, which reduced the entire build setup to a single python bootstrap.py command (covered in a separate post). Beyond the bootstrap, we split the contributor guide into focused sections, added reference documentation for MrDocs comment syntax (so contributors know what @copydoc, @see, and other commands do), and created a run_all_tests script that runs the full test suite locally without needing to understand the CMake test configuration. Onboarding commits b103cba docs(reference): mrdocs comments 9b7ec24 feat(util): run_all_tests script 5902699 docs: update packages 302f0a6 docs: split contribute.adoc guide Automating PR Reviews MrDocs PRs tend to be large and hard to review. A single PR might touch the AST visitor, the Handlebars templates, the Antora extension, the CI configuration, and hundreds of golden test files (when an intentional change to the output format updates the expected output for every test case). We found ourselves making the same review comments over and over. We set up Danger.js to catch these patterns before human reviewers see the PR. The most important check is detecting when source code changes do not include corresponding tests: if someone changes extraction logic but does not update the golden tests, or changes a template without updating the expected output, Danger flags it. 
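The core of that check is easy to sketch. The scope names below come from the setup described here, but the directory prefixes are hypothetical stand-ins for the repository’s real layout, and the real Danger.js rules are considerably richer:

```javascript
// Classify a changed file into a review scope (prefixes are illustrative).
function scopeOf(file) {
  if (file.startsWith("src/")) return "source";
  if (file.startsWith("test-files/")) return "golden-tests";
  if (file.startsWith("test/")) return "tests";
  if (file.startsWith("docs/")) return "docs";
  return "other";
}

// Warn when a PR touches source without touching any kind of test.
function missingTests(changedFiles) {
  const scopes = new Set(changedFiles.map(scopeOf));
  if (!scopes.has("source")) return false;
  return !(scopes.has("tests") || scopes.has("golden-tests"));
}
```

Running the classifier over the full diff also yields the per-scope churn counts that feed the summary table.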
Beyond that, Danger:
- Categorizes all file changes into scopes (source, tests, golden-tests, docs, CI, build, tooling) and generates a summary table showing churn per scope
- Validates commit messages against Conventional Commits format
- Warns when a single commit exceeds 2,000 lines of source churn (encouraging smaller, reviewable slices)
- Flags mismatched commit types (e.g., a feat: commit that only touches test files suggests test: instead)
- Rejects PR descriptions under 40 characters
- Ignores the test check for refactor-only PRs where the tests are expected to remain unchanged

Even when there are no warnings, the scope summary table gives reviewers an immediate sense of what a large PR touches. On a PR that changes 500 lines of source and 3,000 lines of golden tests, the table makes it clear that the bulk of the diff is expected test output, not new logic.

Danger.js commits:
- 6f5f6e9 ci: setup danger.js
- 5429b2e ci(danger): align report table and add top-files summary
- 240921d ci(danger): split PR target ci workflows
- 08c46b6 ci(danger): correct file delta calculation in reports
- 2cfd081 ci(danger): adjust large commit threshold
- 71845c8 ci(danger): map root files into explicit scopes
- 17b0a57 ci(danger): ignore test check for refactor-only PRs
- 6481fd3 ci(danger): simplify CI naming
- fd7d248 ci(danger): omit empty sections from report
- 7502961 ci(danger): categorize util/bootstrap as build scope
- 57e191e ci(danger): better markdown format

CI Infrastructure

We integrated Codecov for tracking test coverage across PRs and switched from GCC to Clang for coverage (more accurate AST-based measurement). CI speed was a recurring concern: we skipped remote documentation generation on PRs, sped up release demos, and skipped long tests that were not catching new bugs. LLVM cache keys were unified to avoid redundant builds, and CTest timeouts were increased for sanitizer jobs that run significantly slower.
Matheus Izvekov contributed the Clang coverage switch, fixed an infinite recursion in extraction, and moved the project to use system libs by default.

CI infrastructure commits:
- ed6b3bc ci: add codecov configuration
- 5426a0a ci: use clang for coverage
- d629173 fix(ci): unify redundant LLVM cache keys
- 36a3b51 ci: update actions to v1.9.1
- 7b2103a ci: increase CTest timeout for MSan jobs
- 086becc ci: increase the ctest timeout to 9000
- adb6821 ci(cpp-matrix): remove the optimized-debug factor
- 9507a38 ci: simplify CI workflow and upgrade cpp-actions to @develop
- 9a5bd3c ci: skip remote documentation generation on PRs
- 637011f ci: detect and report demo generation failures
- 084322d ci: speed up release demos on PRs
- 471951d ci: skip long tests to speed up CI
- a5f160b ci: increase test coverage for the new XML generator
- b1fc43c ci: exclude Reflection.hpp from coverage
- a1f9a82 ci: accept any g++-14 version
- c136a46 ci(website): preserve roadmap directory during deployment
- 4763d86 revert(ci): remove premature roadmap report step
- 3462996 ci: revert coverage changes
- 8b2c3e9 ci: align llvm-sanitizer-config with archive basename
- fdff573 ci: gitignore CI node_modules
- 757d446 fix(ci): update the fmt branch reference from master to main
- a3366b0 fix(ci): name rolling release packages after the branch

Test Infrastructure

MrDocs uses golden tests: the expected output for every test case is stored as a file, and the test runner compares the actual output against it. The most important change was adding multipage golden tests. Previously, all golden tests were single-page, but many bugs only manifested in multi-page output (cross-references between pages, navigation links, index generation). We were missing these entirely because we had no way to test them. We also added output normalization (so platform differences do not cause false failures) and regression categories so tests can be grouped and run selectively.
A run_ci_with_act.py script lets contributors run the full CI pipeline locally using act.

Test infrastructure commits:
- bf78b1b test: support multipage golden tests
- d7ad1ce test: output normalization
- ccd7f71 test: check int tests results in ctest
- 681b0cd chore: assign categories to regression tests
- 9146125 test: cover additional paths in DocCommentFinalizer.cpp
- 8326417 test: run_ci_with_act.py script
- 5527e9c test: testClang_stdCxx default is C++26
- 0dfdb02 test: --bad is disabled by default

Acknowledgments and Reflections

Going into the wild changed MrDocs. The edge cases, the customization requests, and the integration feedback shaped the direction more than any internal roadmap could. Gennaro Prota drove the reflection integration that reduces maintenance burden across the entire codebase. Matheus Izvekov hardened CI with coverage, sanitizers, and warnings-as-errors, and migrated dependency management to the bootstrap script. Julio Estrada and Robert Beeston delivered the polished public face of MrDocs. Agustín Bergé contributed AST and metadata fixes including base member shadowing and alias SFINAE detection. Jean-Louis Leroy provided detailed feedback from Boost.OpenMethod that drove multiple improvements. The most requested feature we have not solved yet is macro support (#1127). Macros are expanded before parsing and do not appear in the AST. Supporting them would require preprocessor-level integration with Clang. The work ahead also includes Lua scripting, metadata transforms, and deeper reflection, all direct responses to what users told us they need. The biggest lesson from this period is that the problems worth solving are the ones users bring. We spent time on an unknowns framework to decide what to explore, but the most impactful work came from people who showed up with a broken demo page, a missing breadcrumb, or a duplicate ellipsis in their generated docs.
The complete set of changes is available in the MrDocs repository.</summary></entry><entry><title type="html">Boost.URL: Audited, Constexpr, and Polished</title><link href="http://cppalliance.org/alan/2026/04/21/Alan.html" rel="alternate" type="text/html" title="Boost.URL: Audited, Constexpr, and Polished" /><published>2026-04-21T00:00:00+00:00</published><updated>2026-04-21T00:00:00+00:00</updated><id>http://cppalliance.org/alan/2026/04/21/Alan</id><content type="html" xml:base="http://cppalliance.org/alan/2026/04/21/Alan.html">&lt;p&gt;We had been putting off the &lt;a href=&quot;https://github.com/boostorg/url&quot;&gt;Boost.URL&lt;/a&gt; security review for a while. There was always something more urgent. When the review finally happened, it confirmed what we hoped: the core parsing logic held up well. Around the same time, a constexpr feature request that we had been dismissing suddenly became a cross-library collaboration when other Boost maintainers started applying changes to their own libraries. And while working on &lt;a href=&quot;https://github.com/cppalliance/beast2&quot;&gt;Boost.Beast2&lt;/a&gt; integration, we noticed friction in common URL operations that led us to clear a backlog of usability improvements.&lt;/p&gt;

&lt;!-- prettier-ignore --&gt;
&lt;ul id=&quot;markdown-toc&quot;&gt;
  &lt;li&gt;&lt;a href=&quot;#security-review&quot; id=&quot;markdown-toc-security-review&quot;&gt;Security Review&lt;/a&gt;    &lt;ul&gt;
      &lt;li&gt;&lt;a href=&quot;#round-1-1207-findings-february-2-2026&quot; id=&quot;markdown-toc-round-1-1207-findings-february-2-2026&quot;&gt;Round 1: 1,207 Findings (February 2, 2026)&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#round-2-27-findings-february-17-2026&quot; id=&quot;markdown-toc-round-2-27-findings-february-17-2026&quot;&gt;Round 2: 27 Findings (February 17, 2026)&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#round-3-15-findings-april-2-2026&quot; id=&quot;markdown-toc-round-3-15-findings-april-2-2026&quot;&gt;Round 3: 15 Findings (April 2, 2026)&lt;/a&gt;&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#compile-time-url-parsing&quot; id=&quot;markdown-toc-compile-time-url-parsing&quot;&gt;Compile-Time URL Parsing&lt;/a&gt;    &lt;ul&gt;
      &lt;li&gt;&lt;a href=&quot;#the-conversation-that-changed-everything&quot; id=&quot;markdown-toc-the-conversation-that-changed-everything&quot;&gt;The Conversation That Changed Everything&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#error-handling-at-compile-time&quot; id=&quot;markdown-toc-error-handling-at-compile-time&quot;&gt;Error Handling at Compile Time&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#the--wmaybe-uninitialized-problem&quot; id=&quot;markdown-toc-the--wmaybe-uninitialized-problem&quot;&gt;The &lt;code&gt;-Wmaybe-uninitialized&lt;/code&gt; Problem&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#the-shared-library-problem&quot; id=&quot;markdown-toc-the-shared-library-problem&quot;&gt;The Shared Library Problem&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#the-result&quot; id=&quot;markdown-toc-the-result&quot;&gt;The Result&lt;/a&gt;&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#usability-improvements&quot; id=&quot;markdown-toc-usability-improvements&quot;&gt;Usability Improvements&lt;/a&gt;    &lt;ul&gt;
      &lt;li&gt;&lt;a href=&quot;#convenience-functions&quot; id=&quot;markdown-toc-convenience-functions&quot;&gt;Convenience Functions&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#c20-integration&quot; id=&quot;markdown-toc-c20-integration&quot;&gt;C++20 Integration&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#performance&quot; id=&quot;markdown-toc-performance&quot;&gt;Performance&lt;/a&gt;&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#acknowledgments-and-reflections&quot; id=&quot;markdown-toc-acknowledgments-and-reflections&quot;&gt;Acknowledgments and Reflections&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h1 id=&quot;security-review&quot;&gt;Security Review&lt;/h1&gt;

&lt;p&gt;The &lt;a href=&quot;https://cppalliance.org/&quot;&gt;C++ Alliance&lt;/a&gt; arranges professional security audits for the libraries we maintain. The results for &lt;a href=&quot;https://www.boost.org/doc/libs/release/libs/beast/doc/html/beast/quick_start/security_review_bishop_fox.html&quot;&gt;Boost.Beast (2020)&lt;/a&gt; and &lt;a href=&quot;https://cppalliance.org/pdf/C%20Plus%20Plus%20Alliance%20-%20Boost%20JSON%20Security%20Assessment%202020%20-%20Assessment%20Report%20-%2020210317.pdf&quot;&gt;Boost.JSON (2021)&lt;/a&gt; are publicly available. For Boost.URL, we always had the plan but kept delaying because there was so much other work to do first. That delay turned out to be a good thing: we found and fixed issues ourselves first, so the reviewers could focus on the subtle problems.&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://www.laurellye.com/&quot;&gt;Laurel Lye Systems Engineering&lt;/a&gt; conducted &lt;strong&gt;three rounds&lt;/strong&gt; of assessment. Each finding was manually reviewed against the source code and categorized as a confirmed bug (fixed), a false positive, or a deliberate design choice. For every confirmed bug, we also proposed new test cases to prevent regressions.&lt;/p&gt;

&lt;h2 id=&quot;round-1-1207-findings-february-2-2026&quot;&gt;Round 1: 1,207 Findings (February 2, 2026)&lt;/h2&gt;

&lt;p&gt;The first assessment was the broadest. Of 1,207 findings, &lt;strong&gt;15 were confirmed bugs&lt;/strong&gt; resulting in fix commits. The vast majority were false positives or by-design patterns:&lt;/p&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt;Verdict&lt;/th&gt;
      &lt;th&gt;CRITICAL&lt;/th&gt;
      &lt;th&gt;HIGH&lt;/th&gt;
      &lt;th&gt;MEDIUM&lt;/th&gt;
      &lt;th&gt;LOW&lt;/th&gt;
      &lt;th&gt;INFO&lt;/th&gt;
      &lt;th&gt;Total&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;&lt;strong&gt;FIXED&lt;/strong&gt;&lt;/td&gt;
      &lt;td&gt;1&lt;/td&gt;
      &lt;td&gt;9&lt;/td&gt;
      &lt;td&gt;0&lt;/td&gt;
      &lt;td&gt;2&lt;/td&gt;
      &lt;td&gt;3&lt;/td&gt;
      &lt;td&gt;&lt;strong&gt;15&lt;/strong&gt;&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;&lt;strong&gt;FALSE POSITIVE&lt;/strong&gt;&lt;/td&gt;
      &lt;td&gt;3&lt;/td&gt;
      &lt;td&gt;47&lt;/td&gt;
      &lt;td&gt;46&lt;/td&gt;
      &lt;td&gt;186&lt;/td&gt;
      &lt;td&gt;110&lt;/td&gt;
      &lt;td&gt;&lt;strong&gt;392&lt;/strong&gt;&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;&lt;strong&gt;BY DESIGN&lt;/strong&gt;&lt;/td&gt;
      &lt;td&gt;0&lt;/td&gt;
      &lt;td&gt;129&lt;/td&gt;
      &lt;td&gt;445&lt;/td&gt;
      &lt;td&gt;170&lt;/td&gt;
      &lt;td&gt;56&lt;/td&gt;
      &lt;td&gt;&lt;strong&gt;800&lt;/strong&gt;&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;&lt;strong&gt;Total&lt;/strong&gt;&lt;/td&gt;
      &lt;td&gt;&lt;strong&gt;4&lt;/strong&gt;&lt;/td&gt;
      &lt;td&gt;&lt;strong&gt;185&lt;/strong&gt;&lt;/td&gt;
      &lt;td&gt;&lt;strong&gt;491&lt;/strong&gt;&lt;/td&gt;
      &lt;td&gt;&lt;strong&gt;358&lt;/strong&gt;&lt;/td&gt;
      &lt;td&gt;&lt;strong&gt;169&lt;/strong&gt;&lt;/td&gt;
      &lt;td&gt;&lt;strong&gt;1,207&lt;/strong&gt;&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;The single &lt;strong&gt;CRITICAL&lt;/strong&gt; fix was a loop condition in &lt;code&gt;url_base&lt;/code&gt; that dereferenced &lt;code&gt;*it&lt;/code&gt; before checking &lt;code&gt;it != end&lt;/code&gt;. Three other CRITICAL findings were false positives: the audit flagged raw-pointer writes in the format engine, but these use a two-phase measure/format design that guarantees the buffer is pre-sized correctly.&lt;/p&gt;
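&lt;p&gt;A minimal sketch of that bug class (the helper is invented, not the actual &lt;code&gt;url_base&lt;/code&gt; code):&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-cpp&quot;&gt;#include &amp;lt;cassert&amp;gt;
#include &amp;lt;string&amp;gt;

// Hedged sketch of the bug class behind the CRITICAL finding:
// advance past leading '/' characters in [it, end).
const char*
skip_slashes(const char* it, const char* end)
{
    // The bounds check must come first; &amp;amp;&amp;amp; short-circuits, so *it is
    // never evaluated once it == end. The bug had the operands reversed.
    while (it != end &amp;amp;&amp;amp; *it == '/')
        ++it;
    return it;
}
&lt;/code&gt;&lt;/pre&gt;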

&lt;p&gt;Most false positives fell into recognizable themes:&lt;/p&gt;
&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;&lt;code&gt;BOOST_ASSERT&lt;/code&gt; as sole bounds check&lt;/strong&gt; (29 HIGH findings): internal &lt;code&gt;_unsafe&lt;/code&gt; functions rely on preconditions validated by the public API. The &lt;code&gt;_unsafe&lt;/code&gt; suffix signals the contract. This is the standard Boost/STL pattern (&lt;code&gt;std::vector::operator[]&lt;/code&gt; vs &lt;code&gt;at()&lt;/code&gt;).&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Non-owning view lifetime safety&lt;/strong&gt; (27 HIGH findings): &lt;code&gt;string_view&lt;/code&gt; and &lt;code&gt;url_view&lt;/code&gt; types do not own their data. The audit flagged potential use-after-free, but lifetime management is the caller’s responsibility by design.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Atomic reference counting&lt;/strong&gt; (multiple findings across all rounds): the audit tool did not recognize the &lt;code&gt;#ifdef BOOST_URL_DISABLE_THREADS&lt;/code&gt; guard that switches between &lt;code&gt;std::atomic&amp;lt;std::size_t&amp;gt;&lt;/code&gt; and plain &lt;code&gt;std::size_t&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;
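&lt;p&gt;The first theme can be sketched with a hypothetical checked/unchecked pair, mirroring the &lt;code&gt;std::vector&lt;/code&gt; pattern (the function names are invented, not Boost.URL’s API):&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-cpp&quot;&gt;#include &amp;lt;cassert&amp;gt;
#include &amp;lt;cstddef&amp;gt;
#include &amp;lt;stdexcept&amp;gt;

char at_unsafe(const char* s, std::size_t n, std::size_t i)
{
    // Precondition i &amp;lt; n is the contract of the caller, signalled by
    // the _unsafe suffix; checked only in debug builds.
    assert(i &amp;lt; n);
    return s[i];
}

char at_checked(const char* s, std::size_t n, std::size_t i)
{
    // The public API validates once, then delegates to the fast path.
    if (i &amp;gt;= n)
        throw std::out_of_range(&quot;index&quot;);
    return at_unsafe(s, n, i);
}
&lt;/code&gt;&lt;/pre&gt;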

&lt;details&gt;
  &lt;summary&gt;Round 1 fix commits&lt;/summary&gt;

  &lt;ul&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/boostorg/url/commit/bcdc891&quot;&gt;bcdc891&lt;/a&gt; CRITICAL: url_base loop condition order&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/boostorg/url/commit/ec15fce&quot;&gt;ec15fce&lt;/a&gt; HIGH: encode() UB pointer arithmetic for small buffers&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/boostorg/url/commit/81fcb95&quot;&gt;81fcb95&lt;/a&gt; HIGH: LLONG_MIN negation UB in format&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/boostorg/url/commit/42c8fe7&quot;&gt;42c8fe7&lt;/a&gt; HIGH: ci_less::operator() return type&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/boostorg/url/commit/76279f5&quot;&gt;76279f5&lt;/a&gt; HIGH: incorrect noexcept in segments_base::front() and back()&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/boostorg/url/commit/d4ae92d&quot;&gt;d4ae92d&lt;/a&gt; HIGH: recycled_ptr::get() nullptr when empty&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/boostorg/url/commit/8d98fe6&quot;&gt;8d98fe6&lt;/a&gt; LOW: decode() noexcept on throwing template&lt;/li&gt;
  &lt;/ul&gt;

&lt;/details&gt;

&lt;p&gt;The ratio of false positives to confirmed bugs was large enough that we arranged a second round with Laurel Lye, sharing the false-positive categories we had identified so the reviewers could target the next pass more precisely.&lt;/p&gt;

&lt;h2 id=&quot;round-2-27-findings-february-17-2026&quot;&gt;Round 2: 27 Findings (February 17, 2026)&lt;/h2&gt;

&lt;p&gt;The second assessment was more targeted. The reviewers had learned from our Round 1 triage and produced fewer false positives:&lt;/p&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt;Verdict&lt;/th&gt;
      &lt;th&gt;HIGH&lt;/th&gt;
      &lt;th&gt;MEDIUM&lt;/th&gt;
      &lt;th&gt;LOW&lt;/th&gt;
      &lt;th&gt;INFO&lt;/th&gt;
      &lt;th&gt;Total&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;&lt;strong&gt;FIXED&lt;/strong&gt;&lt;/td&gt;
      &lt;td&gt;7&lt;/td&gt;
      &lt;td&gt;3&lt;/td&gt;
      &lt;td&gt;1&lt;/td&gt;
      &lt;td&gt;1&lt;/td&gt;
      &lt;td&gt;&lt;strong&gt;12&lt;/strong&gt;&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;&lt;strong&gt;FALSE POSITIVE&lt;/strong&gt;&lt;/td&gt;
      &lt;td&gt;2&lt;/td&gt;
      &lt;td&gt;2&lt;/td&gt;
      &lt;td&gt;0&lt;/td&gt;
      &lt;td&gt;0&lt;/td&gt;
      &lt;td&gt;&lt;strong&gt;4&lt;/strong&gt;&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;&lt;strong&gt;BY DESIGN&lt;/strong&gt;&lt;/td&gt;
      &lt;td&gt;0&lt;/td&gt;
      &lt;td&gt;0&lt;/td&gt;
      &lt;td&gt;1&lt;/td&gt;
      &lt;td&gt;1&lt;/td&gt;
      &lt;td&gt;&lt;strong&gt;2&lt;/strong&gt;&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;&lt;strong&gt;ALREADY FIXED&lt;/strong&gt;&lt;/td&gt;
      &lt;td&gt;0&lt;/td&gt;
      &lt;td&gt;5&lt;/td&gt;
      &lt;td&gt;4&lt;/td&gt;
      &lt;td&gt;0&lt;/td&gt;
      &lt;td&gt;&lt;strong&gt;9&lt;/strong&gt;&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;&lt;strong&gt;Total&lt;/strong&gt;&lt;/td&gt;
      &lt;td&gt;&lt;strong&gt;9&lt;/strong&gt;&lt;/td&gt;
      &lt;td&gt;&lt;strong&gt;10&lt;/strong&gt;&lt;/td&gt;
      &lt;td&gt;&lt;strong&gt;6&lt;/strong&gt;&lt;/td&gt;
      &lt;td&gt;&lt;strong&gt;2&lt;/strong&gt;&lt;/td&gt;
      &lt;td&gt;&lt;strong&gt;27&lt;/strong&gt;&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;Nine of the 27 findings had &lt;strong&gt;already been fixed&lt;/strong&gt; in Round 1 commits. The new confirmed bugs included a heap overflow in format center-alignment padding (&lt;code&gt;lpad = w / 2&lt;/code&gt; halved the total field width instead of the remaining padding), an infinite loop in &lt;code&gt;decode_view::ends_with&lt;/code&gt; with empty strings, and an out-of-bounds read in &lt;code&gt;ci_is_less&lt;/code&gt; on mismatched-length strings.&lt;/p&gt;
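&lt;p&gt;The padding bug is easy to reproduce in isolation. A hedged sketch of the corrected arithmetic (an invented function, not the actual format engine):&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-cpp&quot;&gt;#include &amp;lt;cstddef&amp;gt;
#include &amp;lt;string&amp;gt;

// Center s in a field of width w. The confirmed bug computed
// lpad = w / 2 (half the field width) instead of half the remaining
// padding, which can overrun a buffer sized for w characters.
std::string center(const std::string&amp;amp; s, std::size_t w)
{
    if (s.size() &amp;gt;= w)
        return s;
    std::size_t pad  = w - s.size(); // total padding required
    std::size_t lpad = pad / 2;      // left share of the padding
    std::size_t rpad = pad - lpad;   // right share gets the remainder
    return std::string(lpad, ' ') + s + std::string(rpad, ' ');
}
&lt;/code&gt;&lt;/pre&gt;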

&lt;p&gt;Both rounds are tracked in &lt;a href=&quot;https://github.com/boostorg/url/pull/982&quot;&gt;PR #982&lt;/a&gt;.&lt;/p&gt;

&lt;details&gt;
  &lt;summary&gt;Round 2 fix commits&lt;/summary&gt;

  &lt;ul&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/boostorg/url/commit/d06df88&quot;&gt;d06df88&lt;/a&gt; HIGH: format center-alignment padding&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/boostorg/url/commit/4fe2438&quot;&gt;4fe2438&lt;/a&gt; HIGH: decode_view::ends_with with empty string&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/boostorg/url/commit/f5727ed&quot;&gt;f5727ed&lt;/a&gt; HIGH: stale pattern n.path after colon-encoding&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/boostorg/url/commit/d045d71&quot;&gt;d045d71&lt;/a&gt; HIGH: ci_is_less OOB read&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/boostorg/url/commit/88efbae&quot;&gt;88efbae&lt;/a&gt; HIGH: recycled_ptr copy self-assignment&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/boostorg/url/commit/fe4bdf6&quot;&gt;fe4bdf6&lt;/a&gt; MEDIUM: url move self-assignment&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/boostorg/url/commit/ab5d812&quot;&gt;ab5d812&lt;/a&gt; MEDIUM: encode_one signed char right-shift&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/boostorg/url/commit/b662a8f&quot;&gt;b662a8f&lt;/a&gt; MEDIUM: encode() noexcept on throwing template&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/boostorg/url/commit/5bc52ed&quot;&gt;5bc52ed&lt;/a&gt; LOW: port_rule has_number for port zero at end of input&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/boostorg/url/commit/9c9850f&quot;&gt;9c9850f&lt;/a&gt; INFO: ci_equal arguments by const reference&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/boostorg/url/commit/4f466ce&quot;&gt;4f466ce&lt;/a&gt; test: public interface boundary and fuzz tests&lt;/li&gt;
  &lt;/ul&gt;

&lt;/details&gt;

&lt;h2 id=&quot;round-3-15-findings-april-2-2026&quot;&gt;Round 3: 15 Findings (April 2, 2026)&lt;/h2&gt;

&lt;p&gt;The third round was the shortest and the most precise. Of 15 findings, &lt;strong&gt;4 were confirmed bugs&lt;/strong&gt; and &lt;strong&gt;11 were false positives&lt;/strong&gt;. No CRITICAL findings. The false positives were the same recurring themes (atomic refcounting, pre-validated format strings, preconditions guaranteed by callers).&lt;/p&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt;Verdict&lt;/th&gt;
      &lt;th&gt;HIGH&lt;/th&gt;
      &lt;th&gt;MEDIUM&lt;/th&gt;
      &lt;th&gt;LOW&lt;/th&gt;
      &lt;th&gt;Total&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;&lt;strong&gt;FIXED&lt;/strong&gt;&lt;/td&gt;
      &lt;td&gt;0&lt;/td&gt;
      &lt;td&gt;1&lt;/td&gt;
      &lt;td&gt;3&lt;/td&gt;
      &lt;td&gt;&lt;strong&gt;4&lt;/strong&gt;&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;&lt;strong&gt;FALSE POSITIVE&lt;/strong&gt;&lt;/td&gt;
      &lt;td&gt;4&lt;/td&gt;
      &lt;td&gt;6&lt;/td&gt;
      &lt;td&gt;1&lt;/td&gt;
      &lt;td&gt;&lt;strong&gt;11&lt;/strong&gt;&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;&lt;strong&gt;Total&lt;/strong&gt;&lt;/td&gt;
      &lt;td&gt;&lt;strong&gt;4&lt;/strong&gt;&lt;/td&gt;
      &lt;td&gt;&lt;strong&gt;7&lt;/strong&gt;&lt;/td&gt;
      &lt;td&gt;&lt;strong&gt;4&lt;/strong&gt;&lt;/td&gt;
      &lt;td&gt;&lt;strong&gt;15&lt;/strong&gt;&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;The confirmed bugs were more subtle: a decoded-length calculation error in &lt;code&gt;segments_iter_impl::decrement()&lt;/code&gt; that only manifested during backward iteration over percent-encoded paths, two &lt;a href=&quot;https://en.cppreference.com/w/cpp/language/noexcept_spec&quot;&gt;&lt;code&gt;noexcept&lt;/code&gt;&lt;/a&gt; specifications on functions that allocate &lt;code&gt;std::string&lt;/code&gt; (which can throw &lt;code&gt;bad_alloc&lt;/code&gt;), and a &lt;code&gt;memcpy&lt;/code&gt; with null source when size is zero (undefined behavior per the C standard, even though it copies nothing).&lt;/p&gt;
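&lt;p&gt;The &lt;code&gt;memcpy&lt;/code&gt; fix follows a standard guard pattern; a minimal sketch (not the actual &lt;code&gt;url_view&lt;/code&gt; code):&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-cpp&quot;&gt;#include &amp;lt;cstddef&amp;gt;
#include &amp;lt;cstring&amp;gt;

// Passing a null pointer to memcpy is undefined behavior even when
// n == 0, so the call must be skipped entirely in that case.
void copy_bytes(void* dst, const void* src, std::size_t n)
{
    if (n != 0)
        std::memcpy(dst, src, n);
}
&lt;/code&gt;&lt;/pre&gt;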

&lt;p&gt;This round is tracked in &lt;a href=&quot;https://github.com/boostorg/url/pull/988&quot;&gt;PR #988&lt;/a&gt;.&lt;/p&gt;

&lt;details&gt;
  &lt;summary&gt;Round 3 fix commits&lt;/summary&gt;

  &lt;ul&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/boostorg/url/commit/3ca2d71&quot;&gt;3ca2d71&lt;/a&gt; MEDIUM: segments_iter_impl decoded-length in decrement()&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/boostorg/url/commit/b1f6f8e&quot;&gt;b1f6f8e&lt;/a&gt; LOW: param noexcept on throwing constructor&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/boostorg/url/commit/d42c748&quot;&gt;d42c748&lt;/a&gt; LOW: string_view_base noexcept on throwing operator std::string()&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/boostorg/url/commit/f963383&quot;&gt;f963383&lt;/a&gt; LOW: url_view memcpy with null source when size is zero&lt;/li&gt;
  &lt;/ul&gt;

&lt;/details&gt;

&lt;p&gt;The progression from 1,207 findings to 27 to 15 shows the reviewers learning the peculiarities of our codebase. The ratio of false positives dropped with each round, and the confirmed bugs got more subtle.&lt;/p&gt;

&lt;div class=&quot;mermaid&quot;&gt;
%%{init: {&quot;theme&quot;: &quot;base&quot;, &quot;themeVariables&quot;: {&quot;primaryColor&quot;: &quot;#e4eee8&quot;, &quot;primaryBorderColor&quot;: &quot;#affbd6&quot;, &quot;primaryTextColor&quot;: &quot;#000000&quot;, &quot;lineColor&quot;: &quot;#baf9d9&quot;, &quot;secondaryColor&quot;: &quot;#f0eae4&quot;, &quot;tertiaryColor&quot;: &quot;#ebeaf4&quot;, &quot;fontSize&quot;: &quot;14px&quot;}}}%%
mindmap
  root((Confirmed Bugs))
    UB in edge cases
      encode_one right-shift
      LLONG_MIN negation
      pointer arithmetic
    Self-assignment
      url move
      recycled_ptr copy
    OOB reads
      ci_is_less
      decode_view ends_with
    Incorrect noexcept
      encode / decode
      segments_base front/back
      param constructor
      string_view_base operator
    Iterator bugs
      segments decoded-length
    Null pointer
      recycled_ptr get
      url_view memcpy
&lt;/div&gt;

&lt;h1 id=&quot;compile-time-url-parsing&quot;&gt;Compile-Time URL Parsing&lt;/h1&gt;

&lt;p&gt;&lt;a href=&quot;https://github.com/boostorg/url/issues/890&quot;&gt;&lt;code&gt;constexpr&lt;/code&gt; URL parsing&lt;/a&gt; has been one of the most persistent requests since the library’s inception. Every few months someone would ask about it, and every few months we would decide the refactoring cost was too high. The parsing engine is heavily buffer-oriented, and moving enough code into headers for &lt;a href=&quot;https://en.cppreference.com/w/cpp/language/constexpr&quot;&gt;&lt;code&gt;constexpr&lt;/code&gt;&lt;/a&gt; evaluation, without breaking the shared library build, demanded careful refactoring.&lt;/p&gt;

&lt;p&gt;When we finally prototyped it, the diff touched thousands of lines, but most of those were &lt;strong&gt;code being moved from source files to headers&lt;/strong&gt; rather than new logic. The actual new code was limited to alternative code paths to bypass non-literal types and refactoring &lt;code&gt;url_view_base&lt;/code&gt; to eliminate a self-referencing pointer that prevented &lt;code&gt;constexpr&lt;/code&gt; evaluation. Still, given the size of the change, we initially marked it as unactionable and moved on to the security review.&lt;/p&gt;

&lt;p&gt;Beyond the refactoring cost, we had &lt;strong&gt;blockers beyond our control&lt;/strong&gt;. Our parsing code depended on &lt;a href=&quot;https://github.com/boostorg/optional/issues/143&quot;&gt;&lt;code&gt;boost::optional&lt;/code&gt;&lt;/a&gt; (not a literal type, no constexpr constructors), &lt;a href=&quot;https://github.com/boostorg/variant2&quot;&gt;&lt;code&gt;boost::variant2&lt;/code&gt;&lt;/a&gt; (not literal when containing &lt;code&gt;optional&lt;/code&gt;), and &lt;a href=&quot;https://github.com/boostorg/system/issues/141&quot;&gt;&lt;code&gt;boost::system::result&lt;/code&gt;&lt;/a&gt; (could not be constructed with a custom &lt;code&gt;error_code&lt;/code&gt; in constexpr because &lt;a href=&quot;https://www.boost.org/doc/libs/release/libs/system/doc/html/system.html#ref_error_category&quot;&gt;&lt;code&gt;error_category::failed()&lt;/code&gt;&lt;/a&gt; is virtual). Without changes to those libraries, constexpr URL parsing was not possible regardless of how much we refactored our own code.&lt;/p&gt;

&lt;h2 id=&quot;the-conversation-that-changed-everything&quot;&gt;The Conversation That Changed Everything&lt;/h2&gt;

&lt;p&gt;Then &lt;a href=&quot;https://github.com/pdimov&quot;&gt;Peter Dimov&lt;/a&gt;, the maintainer of &lt;a href=&quot;https://github.com/boostorg/system&quot;&gt;Boost.System&lt;/a&gt; and &lt;a href=&quot;https://github.com/boostorg/variant2&quot;&gt;Boost.Variant2&lt;/a&gt;, joined the &lt;a href=&quot;https://github.com/boostorg/url/issues/890&quot;&gt;conversation&lt;/a&gt;. We had assumed that &lt;code&gt;system::result&amp;lt;T&amp;gt;&lt;/code&gt; could not be &lt;code&gt;constexpr&lt;/code&gt; in C++14 because it wraps &lt;code&gt;error_code&lt;/code&gt;, which uses virtual functions. Peter &lt;a href=&quot;https://github.com/boostorg/url/issues/890#issuecomment-2720949684&quot;&gt;pointed out&lt;/a&gt; that &lt;strong&gt;&lt;code&gt;system::result&amp;lt;T&amp;gt;&lt;/code&gt; is already a literal type&lt;/strong&gt; in C++14 when &lt;code&gt;T&lt;/code&gt; is literal and the error code is not custom. Boost.URL uses a &lt;strong&gt;custom error code category&lt;/strong&gt;, and constructing a &lt;code&gt;result&lt;/code&gt; from a custom &lt;code&gt;error_code&lt;/code&gt; requires calling &lt;code&gt;error_category::failed()&lt;/code&gt;, which is virtual and therefore not &lt;code&gt;constexpr&lt;/code&gt; before C++20. Peter &lt;a href=&quot;https://github.com/boostorg/url/issues/890#issuecomment-3869061934&quot;&gt;offered to fix this&lt;/a&gt; in Boost.System (&lt;a href=&quot;https://github.com/boostorg/system/issues/141&quot;&gt;#141&lt;/a&gt;, &lt;a href=&quot;https://github.com/boostorg/system/commit/af53f17&quot;&gt;af53f17&lt;/a&gt;) for C++20 so that custom error codes would also work at compile time.&lt;/p&gt;

&lt;div class=&quot;admonition&quot;&gt;&lt;div class=&quot;admonition-title&quot;&gt;Allowing constexpr virtual functions in C++20&lt;/div&gt;
&lt;p&gt;Peter Dimov is also one of the authors of &lt;a href=&quot;https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2018/p1064r0.html&quot;&gt;P1064: “Allowing Virtual Function Calls in Constant Expressions”&lt;/a&gt;, the C++ committee proposal that made &lt;code&gt;constexpr&lt;/code&gt; virtual functions possible in C++20. The paper uses &lt;code&gt;error_code&lt;/code&gt; and &lt;code&gt;error_category&lt;/code&gt; as the motivating example.&lt;/p&gt;
&lt;/div&gt;
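&lt;p&gt;A minimal, self-contained illustration of what P1064 enables (hypothetical types, not Boost.System’s; requires C++20):&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-cpp&quot;&gt;// Before P1064, the virtual call below would make the expression
// non-constant; C++20 permits it because the dynamic type is known
// during constant evaluation.
struct category
{
    constexpr virtual bool failed(int ev) const { return ev != 0; }
};

struct my_category : category
{
    constexpr bool failed(int ev) const override { return ev &amp;gt; 0; }
};

constexpr my_category cat{};

// Virtual dispatch through the base class, resolved during constant
// evaluation (ill-formed before C++20).
static_assert(static_cast&amp;lt;const category&amp;amp;&amp;gt;(cat).failed(5));
static_assert(!static_cast&amp;lt;const category&amp;amp;&amp;gt;(cat).failed(-1));
&lt;/code&gt;&lt;/pre&gt;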

&lt;p&gt;That &lt;strong&gt;shifted the problem&lt;/strong&gt;. Instead of building our own &lt;code&gt;constexpr_result&amp;lt;T&amp;gt;&lt;/code&gt; type to bypass the entire error handling system, we could use &lt;code&gt;system::result&lt;/code&gt; directly in C++20. The scope of the refactoring shrank, and we focused on &lt;strong&gt;C++20 as the initial target&lt;/strong&gt;. The remaining blocker was that &lt;code&gt;system::result&amp;lt;T&amp;gt;&lt;/code&gt; requires &lt;code&gt;T&lt;/code&gt; to be a literal type, and we use &lt;code&gt;boost::optional&lt;/code&gt; heavily in our parsing code. &lt;strong&gt;&lt;code&gt;boost::optional&lt;/code&gt; was not a literal type.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://github.com/akrzemi1&quot;&gt;Andrzej Krzemieński&lt;/a&gt;, the Boost.Optional maintainer, &lt;a href=&quot;https://github.com/boostorg/optional/issues/143&quot;&gt;started working on it&lt;/a&gt;. The conversation went back and forth on the &lt;strong&gt;C++14 constraints&lt;/strong&gt;: &lt;code&gt;std::addressof&lt;/code&gt; is not &lt;code&gt;constexpr&lt;/code&gt; until C++17, mandatory copy elision is only available in C++17, and there were questions about what subset of constructors could realistically become &lt;code&gt;constexpr&lt;/code&gt; in C++14. After several iterations (including a &lt;code&gt;feature/constexpr&lt;/code&gt; branch), the &lt;a href=&quot;https://github.com/boostorg/optional/commit/3df2337&quot;&gt;&lt;strong&gt;constexpr implementation&lt;/strong&gt; landed on &lt;code&gt;develop&lt;/code&gt;&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;With &lt;strong&gt;&lt;code&gt;optional&lt;/code&gt; becoming literal&lt;/strong&gt;, &lt;code&gt;boost::variant2&lt;/code&gt; containing &lt;code&gt;optional&lt;/code&gt; could also become literal. All three blockers were now resolved. Peter had fixed Boost.System, Andrzej had fixed Boost.Optional, and we contributed fixes to Boost.Variant2. &lt;strong&gt;There was no going back&lt;/strong&gt;: we could no longer dismiss the constexpr feature after three library maintainers had already done their part.&lt;/p&gt;

&lt;div class=&quot;mermaid&quot;&gt;
%%{init: {&quot;theme&quot;: &quot;base&quot;, &quot;themeVariables&quot;: {&quot;primaryColor&quot;: &quot;#f7f9ff&quot;, &quot;primaryBorderColor&quot;: &quot;#9aa7e8&quot;, &quot;primaryTextColor&quot;: &quot;#1f2a44&quot;, &quot;lineColor&quot;: &quot;#b4bef2&quot;, &quot;secondaryColor&quot;: &quot;#fbf8ff&quot;, &quot;tertiaryColor&quot;: &quot;#ffffff&quot;, &quot;fontSize&quot;: &quot;14px&quot;}}}%%
flowchart TD
    A[Boost.URL constexpr parsing] --&amp;gt; B[Boost.Optional]
    A --&amp;gt; C[Boost.Variant2]
    A --&amp;gt; D[Boost.System]
    B --&amp;gt; E[boost::optional constexpr]
    C --&amp;gt; F[boost::variant2::variant constexpr]
    D --&amp;gt; G[boost::system::result constexpr]
    D --&amp;gt; H[boost::system::error_code constexpr]
&lt;/div&gt;

&lt;details&gt;
  &lt;summary&gt;Cross-library commits for constexpr support&lt;/summary&gt;

  &lt;p&gt;&lt;strong&gt;Boost.URL&lt;/strong&gt; (&lt;a href=&quot;https://github.com/boostorg/url/pull/976&quot;&gt;PR #976&lt;/a&gt;, &lt;a href=&quot;https://github.com/boostorg/url/pull/981&quot;&gt;PR #981&lt;/a&gt;)&lt;/p&gt;
  &lt;ul&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/boostorg/url/commit/0a2c39f&quot;&gt;0a2c39f&lt;/a&gt; feat: constexpr URL parsing for C++20&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/boostorg/url/commit/b9db439&quot;&gt;b9db439&lt;/a&gt; build: remove -Wno-maybe-uninitialized from GCC flags (see &lt;a href=&quot;#the--wmaybe-uninitialized-problem&quot;&gt;below&lt;/a&gt;)&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/boostorg/url/commit/59b4540&quot;&gt;59b4540&lt;/a&gt; fix: suppress GCC false-positive -Wmaybe-uninitialized in tuple_rule (see &lt;a href=&quot;#the--wmaybe-uninitialized-problem&quot;&gt;below&lt;/a&gt;)&lt;/li&gt;
  &lt;/ul&gt;

  &lt;p&gt;&lt;strong&gt;Boost.Optional&lt;/strong&gt; (&lt;a href=&quot;https://github.com/boostorg/optional/issues/143&quot;&gt;issue #143&lt;/a&gt;, &lt;a href=&quot;https://github.com/boostorg/optional/pull/145&quot;&gt;PR #145&lt;/a&gt;)&lt;/p&gt;
  &lt;ul&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/boostorg/optional/commit/3df2337&quot;&gt;3df2337&lt;/a&gt; make optional constexpr in C++14&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/boostorg/optional/commit/046357c&quot;&gt;046357c&lt;/a&gt; add more robust constexpr support&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/boostorg/optional/commit/88e2378&quot;&gt;88e2378&lt;/a&gt; add -Wmaybe-uninitialized pragma (see &lt;a href=&quot;#the--wmaybe-uninitialized-problem&quot;&gt;below&lt;/a&gt;)&lt;/li&gt;
  &lt;/ul&gt;

  &lt;p&gt;&lt;strong&gt;Boost.Variant2&lt;/strong&gt; (&lt;a href=&quot;https://github.com/boostorg/variant2/pull/57&quot;&gt;PR #57&lt;/a&gt;)&lt;/p&gt;
  &lt;ul&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/boostorg/variant2/commit/b6ce8ac&quot;&gt;b6ce8ac&lt;/a&gt; add missing -Wmaybe-uninitialized pragma (see &lt;a href=&quot;#the--wmaybe-uninitialized-problem&quot;&gt;below&lt;/a&gt;)&lt;/li&gt;
  &lt;/ul&gt;

  &lt;p&gt;&lt;strong&gt;Boost.System&lt;/strong&gt; (&lt;a href=&quot;https://github.com/boostorg/system/issues/141&quot;&gt;issue #141&lt;/a&gt;)&lt;/p&gt;
  &lt;ul&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/boostorg/system/commit/af53f17&quot;&gt;af53f17&lt;/a&gt; add constexpr to virtual functions on C++20 or later&lt;/li&gt;
  &lt;/ul&gt;

&lt;/details&gt;

&lt;h2 id=&quot;error-handling-at-compile-time&quot;&gt;Error Handling at Compile Time&lt;/h2&gt;

&lt;p&gt;Boost.URL attaches &lt;a href=&quot;https://en.cppreference.com/w/cpp/utility/source_location&quot;&gt;source location&lt;/a&gt; information to error codes for better diagnostics at runtime. In a &lt;code&gt;constexpr&lt;/code&gt; context, &lt;code&gt;BOOST_CURRENT_LOCATION&lt;/code&gt; is not available, so the &lt;code&gt;BOOST_URL_CONSTEXPR_RETURN_EC&lt;/code&gt; macro branches on &lt;a href=&quot;https://en.cppreference.com/w/cpp/types/is_constant_evaluated&quot;&gt;&lt;code&gt;__builtin_is_constant_evaluated()&lt;/code&gt;&lt;/a&gt;: at compile time it returns the error enum directly, at runtime it attaches the source location.&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-cpp&quot;&gt;#if defined(BOOST_URL_HAS_CXX20_CONSTEXPR)
# define BOOST_URL_CONSTEXPR_RETURN_EC(ev) \
    do { \
        if (__builtin_is_constant_evaluated()) { \
            return (ev); \
        } \
        return [](auto e) { \
            BOOST_URL_RETURN_EC(e); \
        }(ev); \
    } while(0)
#endif
&lt;/code&gt;&lt;/pre&gt;

&lt;h2 id=&quot;the--wmaybe-uninitialized-problem&quot;&gt;The &lt;code&gt;-Wmaybe-uninitialized&lt;/code&gt; Problem&lt;/h2&gt;

&lt;p&gt;GCC’s &lt;a href=&quot;https://gcc.gnu.org/onlinedocs/gcc/Warning-Options.html#index-Wmaybe-uninitialized&quot;&gt;&lt;code&gt;-Wmaybe-uninitialized&lt;/code&gt;&lt;/a&gt; flagged code inside the &lt;code&gt;boost::optional&lt;/code&gt; and &lt;code&gt;boost::variant2&lt;/code&gt; union storage constructors. Neither library contained an actual bug: the warning is a false positive of GCC’s data-flow analysis.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The inlining chain:&lt;/strong&gt; Boost.URL’s parsing code constructs a &lt;code&gt;variant2::variant&lt;/code&gt; that contains an &lt;code&gt;optional&lt;/code&gt; alternative. At &lt;strong&gt;&lt;code&gt;-O3&lt;/code&gt;&lt;/strong&gt;, GCC inlines the entire chain:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Parse function&lt;/li&gt;
  &lt;li&gt;Variant construction&lt;/li&gt;
  &lt;li&gt;&lt;code&gt;variant2&lt;/code&gt; storage&lt;/li&gt;
  &lt;li&gt;&lt;code&gt;optional&lt;/code&gt; storage&lt;/li&gt;
  &lt;li&gt;Union constructor&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;After inlining, GCC sees a union with a &lt;code&gt;dummy_&lt;/code&gt; member and a &lt;code&gt;value_&lt;/code&gt; member, and it cannot prove which member is active. It conflates the “uninitialized dummy” path with the “initialized value” path. The &lt;code&gt;in_place_index_t&amp;lt;I&amp;gt;&lt;/code&gt; dispatch guarantees which member is initialized, but GCC’s data flow analysis loses track across the nested layers. &lt;a href=&quot;https://clang.llvm.org/docs/AddressSanitizer.html&quot;&gt;&lt;code&gt;-fsanitize=address&lt;/code&gt;&lt;/a&gt; makes it worse by changing inlining thresholds.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The compiler blames the wrong library.&lt;/strong&gt; The root cause is in &lt;code&gt;variant2&lt;/code&gt;’s union storage, but when &lt;code&gt;variant2&lt;/code&gt; contains an &lt;code&gt;optional&lt;/code&gt;, GCC reports the warning in &lt;code&gt;optional&lt;/code&gt;’s code. The pragma has to go where GCC reports it, not where the issue originates. We contributed pragmas to both &lt;a href=&quot;https://github.com/boostorg/optional/pull/145&quot;&gt;Boost.Optional&lt;/a&gt; and &lt;a href=&quot;https://github.com/boostorg/variant2/pull/57&quot;&gt;Boost.Variant2&lt;/a&gt;, and replaced Boost.URL’s blanket &lt;code&gt;-Wno-maybe-uninitialized&lt;/code&gt; flag with &lt;a href=&quot;https://github.com/boostorg/url/pull/981&quot;&gt;targeted pragmas&lt;/a&gt;.&lt;/p&gt;
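&lt;p&gt;The targeted suppressions follow the usual GCC pragma sandwich; a sketch of the shape (the real pragmas live in the linked PRs, and the function here is a placeholder):&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-cpp&quot;&gt;// Silence only this diagnostic, only around the code where GCC
// reports it, leaving -Wmaybe-uninitialized active everywhere else.
#if defined(__GNUC__) &amp;amp;&amp;amp; !defined(__clang__)
# pragma GCC diagnostic push
# pragma GCC diagnostic ignored &quot;-Wmaybe-uninitialized&quot;
#endif

// Placeholder for the union-storage constructor that triggers the
// false positive after inlining.
int construct_storage() { return 0; }

#if defined(__GNUC__) &amp;amp;&amp;amp; !defined(__clang__)
# pragma GCC diagnostic pop
#endif
&lt;/code&gt;&lt;/pre&gt;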

&lt;blockquote&gt;
  &lt;p&gt;This particular false positive requires &lt;strong&gt;GCC 14+&lt;/strong&gt;, &lt;strong&gt;&lt;code&gt;-O3&lt;/code&gt;&lt;/strong&gt;, &lt;strong&gt;ASan&lt;/strong&gt;, on &lt;strong&gt;x86_64 Linux&lt;/strong&gt;, with a &lt;code&gt;variant2::variant&lt;/code&gt; containing a &lt;code&gt;boost::optional&lt;/code&gt;, constructed through a &lt;code&gt;system::result&lt;/code&gt; dereference. Change any one of those conditions and the warning disappears.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This leaves an open question for the Boost ecosystem: when a false positive surfaces because library A’s optimizer behavior interacts with library B’s union storage and gets reported in library C’s code, who is responsible for the pragma? For now, we placed pragmas where GCC reports the issue, but the underlying problem recurs every time a new combination of types triggers the same inlining pattern.&lt;/p&gt;

&lt;h2 id=&quot;the-shared-library-problem&quot;&gt;The Shared Library Problem&lt;/h2&gt;

&lt;p&gt;Making URL parsing &lt;code&gt;constexpr&lt;/code&gt; means the parsing functions must be available in headers. But Boost.URL is a compiled library, and on MSVC, &lt;a href=&quot;https://learn.microsoft.com/en-us/cpp/cpp/dllexport-dllimport?view=msvc-170&quot;&gt;&lt;code&gt;__declspec(dllexport)&lt;/code&gt;&lt;/a&gt; on a class exports &lt;strong&gt;all&lt;/strong&gt; members, including inline and &lt;code&gt;constexpr&lt;/code&gt; ones. This causes &lt;a href=&quot;https://learn.microsoft.com/en-us/cpp/error-messages/tool-errors/linker-tools-error-lnk2005?view=msvc-170&quot;&gt;&lt;code&gt;LNK2005&lt;/code&gt;&lt;/a&gt; (duplicate symbol) errors for any class that mixes compiled and header-only members.&lt;/p&gt;

&lt;p&gt;Each class must follow exactly one of two policies:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;(a)&lt;/strong&gt; Fully compiled: &lt;code&gt;class BOOST_URL_DECL C&lt;/code&gt;. All members in &lt;code&gt;.cpp&lt;/code&gt; files. No inline or &lt;code&gt;constexpr&lt;/code&gt; members.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;(b)&lt;/strong&gt; Fully header-only: &lt;code&gt;class BOOST_SYMBOL_VISIBLE C&lt;/code&gt;. All inline/&lt;code&gt;constexpr&lt;/code&gt;/template. No &lt;code&gt;.cpp&lt;/code&gt; file.&lt;/li&gt;
&lt;/ul&gt;
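&lt;p&gt;A hypothetical sketch of the two policies (the macros are Boost’s; both classes are invented, and the stand-in macro definitions exist only so the sketch compiles outside a Boost build):&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-cpp&quot;&gt;// Stand-ins so this compiles without Boost headers.
#ifndef BOOST_URL_DECL
# define BOOST_URL_DECL
#endif
#ifndef BOOST_SYMBOL_VISIBLE
# define BOOST_SYMBOL_VISIBLE
#endif

// (a) Fully compiled: every member lives in a .cpp file; nothing
// inline or constexpr, so dllexport never meets a header definition.
class BOOST_URL_DECL parser_impl
{
public:
    parser_impl();   // defined in parser_impl.cpp
    void run();      // defined in parser_impl.cpp
};

// (b) Fully header-only: everything inline/constexpr/template; the
// class is never exported from the compiled library.
class BOOST_SYMBOL_VISIBLE small_view
{
public:
    constexpr small_view() noexcept = default;
    constexpr bool empty() const noexcept { return true; }
};
&lt;/code&gt;&lt;/pre&gt;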

&lt;p&gt;We documented the full rationale in &lt;a href=&quot;https://github.com/boostorg/url/blob/develop/include/boost/url/detail/config.hpp&quot;&gt;&lt;code&gt;config.hpp&lt;/code&gt;&lt;/a&gt;. We suspect other C++ libraries have not encountered this because they either do not test shared library builds as extensively as we do, or they are header-only.&lt;/p&gt;

&lt;h2 id=&quot;the-result&quot;&gt;The Result&lt;/h2&gt;

&lt;p&gt;Boost.URL can now parse URLs at compile time under C++20 (&lt;a href=&quot;https://github.com/boostorg/url/pull/976&quot;&gt;PR #976&lt;/a&gt;). All parse functions (&lt;code&gt;parse_uri&lt;/code&gt;, &lt;code&gt;parse_uri_reference&lt;/code&gt;, &lt;code&gt;parse_relative_ref&lt;/code&gt;, &lt;code&gt;parse_absolute_uri&lt;/code&gt;, and &lt;code&gt;parse_origin_form&lt;/code&gt;) are fully &lt;code&gt;constexpr&lt;/code&gt;. A malformed URL literal becomes a compile error rather than a runtime failure:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-cpp&quot;&gt;// Parsed and validated at compile time.
// A malformed literal would fail to compile.
constexpr url_view api_base =
    parse_uri(&quot;https://api.example.com/v2&quot;).value();
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Pre-parsed &lt;code&gt;constexpr&lt;/code&gt; URL views also serve as &lt;strong&gt;zero-cost constants&lt;/strong&gt;: because all parsing happens during compilation, components like scheme, host, and port are available at runtime with no parsing overhead. This is useful for applications that compare against well-known endpoints, pre-populate configuration defaults, or build routing tables without paying for string parsing at startup.&lt;/p&gt;
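&lt;p&gt;For example (a sketch assuming a Boost.URL build that includes PR #976; the &lt;code&gt;api_base&lt;/code&gt; constant is repeated so the snippet stands alone):&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-cpp&quot;&gt;#include &amp;lt;boost/url.hpp&amp;gt;
using namespace boost::urls;

// Parsed once, at compile time (repeated from the example above).
constexpr url_view api_base =
    parse_uri(&quot;https://api.example.com/v2&quot;).value();

// Runtime code only reads precomputed components; no string parsing.
bool is_api_endpoint(url_view u)
{
    return u.scheme() == api_base.scheme()
        &amp;amp;&amp;amp; u.host() == api_base.host();
}
&lt;/code&gt;&lt;/pre&gt;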

&lt;p&gt;The constexpr feature taught us that dismissing a request because the cost seems too high for one library misses the bigger picture. Once Peter Dimov and the other maintainers got involved, the cost was shared and the scope shrank. In the Boost ecosystem, a feature that seems expensive in isolation can become practical when the dependencies cooperate.&lt;/p&gt;

&lt;h1 id=&quot;usability-improvements&quot;&gt;Usability Improvements&lt;/h1&gt;

&lt;p&gt;While integrating Boost.URL into &lt;a href=&quot;https://github.com/cppalliance/beast2&quot;&gt;Boost.Beast2&lt;/a&gt;, the Beast2 authors noticed friction in common operations that worked correctly but required more code than they should. At the same time, several community issues had been open for a while. We used this as an opportunity to address both.&lt;/p&gt;

&lt;h2 id=&quot;convenience-functions&quot;&gt;Convenience Functions&lt;/h2&gt;

&lt;p&gt;The most requested feature was &lt;strong&gt;&lt;a href=&quot;https://github.com/boostorg/url/pull/953&quot;&gt;&lt;code&gt;get_or&lt;/code&gt;&lt;/a&gt;&lt;/strong&gt; for query containers: look up a query parameter by key and return a default value if it is not present.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Before:&lt;/strong&gt;&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-cpp&quot;&gt;auto it = url.params().find(&quot;page&quot;);
auto page = it != url.params().end() ? (*it).value : &quot;1&quot;;
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;strong&gt;After:&lt;/strong&gt;&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-cpp&quot;&gt;auto page = url.params().get_or(&quot;page&quot;, &quot;1&quot;);
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;We also added &lt;strong&gt;&lt;a href=&quot;https://github.com/boostorg/url/pull/952&quot;&gt;standalone decode functions&lt;/a&gt;&lt;/strong&gt; for working with individual URL components without constructing a full URL object:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-cpp&quot;&gt;auto plain = decode(&quot;My%20Stuff&quot;);
assert(plain &amp;amp;&amp;amp; *plain == &quot;My Stuff&quot;);

auto n = decoded_size(&quot;Program%20Files&quot;);
assert(n &amp;amp;&amp;amp; *n == 13);
&lt;/code&gt;&lt;/pre&gt;
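
&lt;p&gt;For instance (a sketch, pairing the new &lt;code&gt;decode&lt;/code&gt; with the library’s existing &lt;code&gt;encode&lt;/code&gt; function and the &lt;code&gt;pchars&lt;/code&gt; character set), the two functions round-trip a string:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-cpp&quot;&gt;// Spaces are not in pchars, so encode() percent-encodes them.
auto s = encode(&quot;My Stuff&quot;, pchars);   // &quot;My%20Stuff&quot;
auto p = decode(s);
assert(p &amp;amp;&amp;amp; *p == &quot;My Stuff&quot;);
&lt;/code&gt;&lt;/pre&gt;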

&lt;h2 id=&quot;c20-integration&quot;&gt;C++20 Integration&lt;/h2&gt;

&lt;p&gt;&lt;a href=&quot;https://github.com/boostorg/url/pull/966&quot;&gt;&lt;code&gt;enable_borrowed_range&lt;/code&gt;&lt;/a&gt; is now specialized for 10 Boost.URL view types (&lt;code&gt;segments_view&lt;/code&gt;, &lt;code&gt;params_view&lt;/code&gt;, &lt;code&gt;decode_view&lt;/code&gt;, and others). Unlike a &lt;code&gt;std::vector&lt;/code&gt;, which owns its data, Boost.URL views point into the URL’s buffer without owning it. When a temporary view is destroyed, its iterators still point to valid memory. &lt;a href=&quot;https://en.cppreference.com/w/cpp/ranges/borrowed_range&quot;&gt;&lt;code&gt;enable_borrowed_range&lt;/code&gt;&lt;/a&gt; tells the compiler this is safe, so algorithms like &lt;a href=&quot;https://en.cppreference.com/w/cpp/algorithm/ranges/find&quot;&gt;&lt;code&gt;std::ranges::find&lt;/code&gt;&lt;/a&gt; can return iterators from temporary views without the compiler rejecting the code:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-cpp&quot;&gt;segments_view::iterator it;
{
    segments_view ps(&quot;/path/to/file.txt&quot;);
    it = ps.begin();
}
// iterator is still valid (points to external buffer)
assert(*it == &quot;path&quot;);
&lt;/code&gt;&lt;/pre&gt;
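
&lt;p&gt;For example (a sketch; the URL is hypothetical), &lt;code&gt;std::ranges::find&lt;/code&gt; can consume the temporary view returned by &lt;code&gt;url_view::segments()&lt;/code&gt; directly, and the iterator it returns outlives that temporary:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-cpp&quot;&gt;url_view u = parse_uri(&quot;https://example.com/api/v2/users&quot;).value();
// segments() returns a temporary view, but because segments_view is a
// borrowed range, the iterator remains usable after the temporary is gone.
auto it = std::ranges::find(u.segments(), &quot;api&quot;);
assert(*it == &quot;api&quot;);
&lt;/code&gt;&lt;/pre&gt;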

&lt;p&gt;The grammar system gained &lt;strong&gt;&lt;a href=&quot;https://github.com/boostorg/url/pull/950&quot;&gt;user-provided RangeRule support&lt;/a&gt;&lt;/strong&gt;. Custom grammar rules for parsing URL components must satisfy a concept requiring &lt;code&gt;first()&lt;/code&gt; and &lt;code&gt;next()&lt;/code&gt; methods returning &lt;code&gt;system::result&amp;lt;value_type&amp;gt;&lt;/code&gt;:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-cpp&quot;&gt;struct my_range_rule
{
    using value_type = core::string_view;

    system::result&amp;lt;value_type&amp;gt;
    first(char const*&amp;amp; it, char const* end) const noexcept;

    system::result&amp;lt;value_type&amp;gt;
    next(char const*&amp;amp; it, char const* end) const noexcept;
};
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The motivation was performance and API clarity (&lt;a href=&quot;https://github.com/boostorg/url/issues/943&quot;&gt;#943&lt;/a&gt;). Previously, &lt;code&gt;grammar::range&amp;lt;T&amp;gt;&lt;/code&gt; always type-erased the rule through a &lt;code&gt;recycled_ptr&lt;/code&gt; with string storage. &lt;strong&gt;Stateless rules were paying for storage they did not need.&lt;/strong&gt; With user-provided RangeRule, &lt;code&gt;range&amp;lt;T, RangeRule&amp;gt;&lt;/code&gt; detects empty rules and avoids the type-erasure overhead entirely.&lt;/p&gt;

&lt;h2 id=&quot;performance&quot;&gt;Performance&lt;/h2&gt;

&lt;p&gt;Component offsets in &lt;code&gt;url_impl&lt;/code&gt; changed from &lt;code&gt;size_t&lt;/code&gt; to &lt;strong&gt;&lt;a href=&quot;https://github.com/boostorg/url/pull/969&quot;&gt;&lt;code&gt;uint32_t&lt;/code&gt;&lt;/a&gt;&lt;/strong&gt;, reducing the size of every URL object on 64-bit platforms. The maximum URL size is capped at &lt;code&gt;UINT32_MAX - 1&lt;/code&gt; (enforced by a &lt;code&gt;static_assert&lt;/code&gt;). Constructing a &lt;code&gt;segments_view&lt;/code&gt; or &lt;code&gt;segments_encoded_view&lt;/code&gt; from a URL is now a &lt;strong&gt;constant-time operation&lt;/strong&gt;: offsets are computed directly from iterator indices without scanning the path.&lt;/p&gt;

&lt;details&gt;
  &lt;summary&gt;Other improvements&lt;/summary&gt;

  &lt;p&gt;&lt;strong&gt;Fixes&lt;/strong&gt;&lt;/p&gt;
  &lt;ul&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/boostorg/url/commit/a87998a&quot;&gt;a87998a&lt;/a&gt; &lt;code&gt;params_iter_impl::decrement()&lt;/code&gt; computed incorrect decoded key/value sizes when a query parameter’s value contains literal &lt;code&gt;=&lt;/code&gt; characters (&lt;a href=&quot;https://github.com/boostorg/url/pull/978&quot;&gt;PR #978&lt;/a&gt;, &lt;a href=&quot;https://github.com/boostorg/url/issues/972&quot;&gt;#972&lt;/a&gt;)&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/boostorg/url/commit/60c281a&quot;&gt;60c281a&lt;/a&gt; &lt;code&gt;decode_view::remove_prefix&lt;/code&gt;/&lt;code&gt;remove_suffix&lt;/code&gt; asserted &lt;code&gt;n &amp;lt;= size()&lt;/code&gt; instead of preventing undefined behavior (&lt;a href=&quot;https://github.com/boostorg/url/pull/978&quot;&gt;PR #978&lt;/a&gt;, &lt;a href=&quot;https://github.com/boostorg/url/issues/973&quot;&gt;#973&lt;/a&gt;)&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/boostorg/url/commit/01e0571&quot;&gt;01e0571&lt;/a&gt; &lt;code&gt;decode_view&lt;/code&gt; was forward-declared but not complete when &lt;code&gt;pct_string_view::operator*()&lt;/code&gt; was declared (&lt;a href=&quot;https://github.com/boostorg/url/pull/963&quot;&gt;PR #963&lt;/a&gt;)&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/boostorg/url/commit/cbaf493&quot;&gt;cbaf493&lt;/a&gt; &lt;code&gt;parse_query&lt;/code&gt; guard for empty &lt;code&gt;string_view&lt;/code&gt; inputs from null data (&lt;a href=&quot;https://github.com/boostorg/url/pull/949&quot;&gt;PR #949&lt;/a&gt;)&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/boostorg/url/commit/161cf73&quot;&gt;161cf73&lt;/a&gt; example router is now move-only (&lt;a href=&quot;https://github.com/boostorg/url/pull/959&quot;&gt;PR #959&lt;/a&gt;)&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/boostorg/url/commit/13f0110&quot;&gt;13f0110&lt;/a&gt; natvis: add visualizers for segments (&lt;a href=&quot;https://github.com/boostorg/url/pull/962&quot;&gt;PR #962&lt;/a&gt;)&lt;/li&gt;
  &lt;/ul&gt;

  &lt;p&gt;&lt;strong&gt;Refactors&lt;/strong&gt;&lt;/p&gt;
  &lt;ul&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/boostorg/url/commit/e809ee4&quot;&gt;e809ee4&lt;/a&gt; &lt;code&gt;token_rule_t&lt;/code&gt; now uses the &lt;a href=&quot;https://en.cppreference.com/w/cpp/language/ebo&quot;&gt;empty base optimization&lt;/a&gt; via &lt;a href=&quot;https://www.boost.org/doc/libs/release/libs/core/doc/html/core/empty_value.html&quot;&gt;&lt;code&gt;empty_value&lt;/code&gt;&lt;/a&gt; and provides conditional default construction (&lt;a href=&quot;https://github.com/boostorg/url/pull/964&quot;&gt;PR #964&lt;/a&gt;)&lt;/li&gt;
  &lt;/ul&gt;

  &lt;p&gt;&lt;strong&gt;Documentation&lt;/strong&gt;&lt;/p&gt;
  &lt;ul&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/boostorg/url/commit/32c3ddc&quot;&gt;32c3ddc&lt;/a&gt; new &lt;strong&gt;&lt;a href=&quot;https://github.com/boostorg/url/pull/987&quot;&gt;design rationale page&lt;/a&gt;&lt;/strong&gt;&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/boostorg/url/commit/000476c&quot;&gt;000476c&lt;/a&gt; restore library-detail.adoc with shorter description&lt;/li&gt;
    &lt;li&gt;Legacy QuickBook documentation removed in favor of Antora-based docs&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/boostorg/url/commit/8c7c4c7&quot;&gt;8c7c4c7&lt;/a&gt; &lt;strong&gt;&lt;a href=&quot;https://github.com/boostorg/url/pull/970&quot;&gt;plus scheme convention&lt;/a&gt;&lt;/strong&gt; documented&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/boostorg/url/commit/6d396a4&quot;&gt;6d396a4&lt;/a&gt; format examples show full URL&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/boostorg/url/commit/e4e6644&quot;&gt;e4e6644&lt;/a&gt; SVG diagrams with medium brightness backgrounds&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/boostorg/url/commit/c93553c&quot;&gt;c93553c&lt;/a&gt; simplify SVG documentation images&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/boostorg/url/commit/e618e69&quot;&gt;e618e69&lt;/a&gt; avoid shadow warnings while improving param_view docs&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/boostorg/url/commit/4f63aea&quot;&gt;4f63aea&lt;/a&gt; antora-downloads-extension integration&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/boostorg/url/commit/7f08ce2&quot;&gt;7f08ce2&lt;/a&gt; update antora extensions&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/boostorg/url/commit/67bcd2d&quot;&gt;67bcd2d&lt;/a&gt; build script sets root dirs&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/boostorg/url/commit/888cd8c&quot;&gt;888cd8c&lt;/a&gt; &lt;strong&gt;&lt;a href=&quot;https://github.com/boostorg/url/pull/951&quot;&gt;MrDocs-generated tagfiles&lt;/a&gt;&lt;/strong&gt; for cross-referencing with other Boost libraries&lt;/li&gt;
  &lt;/ul&gt;

  &lt;p&gt;&lt;strong&gt;Tests&lt;/strong&gt;&lt;/p&gt;
  &lt;ul&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/boostorg/url/commit/e946887&quot;&gt;e946887&lt;/a&gt; URL with &lt;code&gt;?&lt;/code&gt; in query string (&lt;a href=&quot;https://github.com/boostorg/url/pull/978&quot;&gt;PR #978&lt;/a&gt;, &lt;a href=&quot;https://github.com/boostorg/url/issues/926&quot;&gt;#926&lt;/a&gt;)&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/boostorg/url/commit/3228399&quot;&gt;3228399&lt;/a&gt; URL natvis instantiations&lt;/li&gt;
  &lt;/ul&gt;

&lt;/details&gt;

&lt;p&gt;Most of these improvements came from real usage. The Beast2 integration exposed friction that we would not have found from inside the library, and the community issues represented patterns that multiple users had independently hit. The best usability feedback comes from people who are actually building something with the library.&lt;/p&gt;

&lt;h1 id=&quot;acknowledgments-and-reflections&quot;&gt;Acknowledgments and Reflections&lt;/h1&gt;

&lt;p&gt;The constexpr work benefited from the contributions of &lt;strong&gt;&lt;a href=&quot;https://github.com/pdimov&quot;&gt;Peter Dimov&lt;/a&gt;&lt;/strong&gt; (&lt;a href=&quot;https://github.com/boostorg/system&quot;&gt;Boost.System&lt;/a&gt;, &lt;a href=&quot;https://github.com/boostorg/variant2&quot;&gt;Boost.Variant2&lt;/a&gt;) and &lt;strong&gt;&lt;a href=&quot;https://github.com/akrzemi1&quot;&gt;Andrzej Krzemieński&lt;/a&gt;&lt;/strong&gt; (&lt;a href=&quot;https://github.com/boostorg/optional&quot;&gt;Boost.Optional&lt;/a&gt;), who applied fixes to their libraries so that Boost.URL could proceed. The Beast2 usability feedback came from the Beast2 authors as they integrated Boost.URL into the new design.&lt;/p&gt;

&lt;p&gt;The work on Boost.URL has shifted. The problems we are solving now (edge cases found by professional auditors, compiler limitations for constexpr, usability friction from real integrations) are different from the problems we used to solve. They are smaller and more specific, but they matter more because real people hit them.&lt;/p&gt;

&lt;p&gt;The complete set of changes is available in the &lt;a href=&quot;https://github.com/boostorg/url&quot;&gt;Boost.URL repository&lt;/a&gt;.&lt;/p&gt;</content><author><name></name></author><category term="alan" /><summary type="html">We had been putting off the Boost.URL security review for a while. There was always something more urgent. When the review finally happened, it confirmed what we hoped: the core parsing logic held up well. Around the same time, a constexpr feature request that we had been dismissing suddenly became a cross-library collaboration when other Boost maintainers started applying changes to their own libraries. And while working on Boost.Beast2 integration, we noticed friction in common URL operations that led us to clear a backlog of usability improvements.</summary></entry><entry><title type="html">MrDocs Bootstrap: One Script to Build Them All</title><link href="http://cppalliance.org/alan/2026/04/15/Alan.html" rel="alternate" type="text/html" title="MrDocs Bootstrap: One Script to Build Them All" /><published>2026-04-15T00:00:00+00:00</published><updated>2026-04-15T00:00:00+00:00</updated><id>http://cppalliance.org/alan/2026/04/15/Alan</id><content type="html" xml:base="http://cppalliance.org/alan/2026/04/15/Alan.html">&lt;p&gt;When new developers joined the &lt;a href=&quot;https://www.mrdocs.com&quot;&gt;MrDocs&lt;/a&gt; team, we expected the usual ramp-up: learning the codebase, understanding the architecture, and getting comfortable with the review process. What we did not expect was that &lt;strong&gt;building and testing the project&lt;/strong&gt; would be the hardest part. People dedicated to the project full-time spent weeks just trying to get a working build.
Even when they succeeded, each person ended up with their own set of workarounds: a custom script here, a patched flag there, an undocumented environment variable somewhere else. A single unrelated commit could silently break another developer’s local setup. And even after all of that, they still did not know which commands to run to test the project.&lt;/p&gt;

&lt;p&gt;As the complexity grew, we naturally reached for a &lt;strong&gt;package manager&lt;/strong&gt;. We adopted &lt;a href=&quot;https://vcpkg.io/&quot;&gt;vcpkg&lt;/a&gt;, but over time we discovered that our problem was more complex than any package manager is designed to handle. The build type combinations, the sanitizer propagation, the cross-platform toolchain differences, and the IDE configurations: these are workflow problems that kept accumulating. That realization, combined with an onboarding crisis where new contributors could not build the project at all, led us to write our own &lt;strong&gt;bootstrap script&lt;/strong&gt;. The idea was not unfamiliar: at the &lt;a href=&quot;https://cppalliance.org/&quot;&gt;C++ Alliance&lt;/a&gt;, we work closely with the &lt;a href=&quot;https://www.boost.org/&quot;&gt;Boost&lt;/a&gt; libraries, and Boost has shipped a &lt;a href=&quot;https://github.com/boostorg/boost/blob/master/bootstrap.sh&quot;&gt;bootstrap script&lt;/a&gt; for years. We knew the pattern worked. We just needed to apply it to our own dependency problem.&lt;/p&gt;

&lt;p&gt;This post explains &lt;strong&gt;why robust C++ workflows are fundamentally difficult&lt;/strong&gt;, not only for dependency management but also for supporting multiple platforms, compilers, and testing configurations. It describes what we learned from our experience with vcpkg and how a bootstrap script solved the problem for MrDocs.&lt;/p&gt;

&lt;!-- prettier-ignore --&gt;
&lt;ul id=&quot;markdown-toc&quot;&gt;
  &lt;li&gt;&lt;a href=&quot;#why-dependency-management-is-hard&quot; id=&quot;markdown-toc-why-dependency-management-is-hard&quot;&gt;Why Dependency Management Is Hard&lt;/a&gt;    &lt;ul&gt;
      &lt;li&gt;&lt;a href=&quot;#a-combinatorial-explosion&quot; id=&quot;markdown-toc-a-combinatorial-explosion&quot;&gt;A Combinatorial Explosion&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#why-c-makes-it-worse&quot; id=&quot;markdown-toc-why-c-makes-it-worse&quot;&gt;Why C++ Makes It Worse&lt;/a&gt;&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#what-went-wrong-for-mrdocs&quot; id=&quot;markdown-toc-what-went-wrong-for-mrdocs&quot;&gt;What Went Wrong for MrDocs&lt;/a&gt;    &lt;ul&gt;
      &lt;li&gt;&lt;a href=&quot;#where-vcpkg-fell-short&quot; id=&quot;markdown-toc-where-vcpkg-fell-short&quot;&gt;Where vcpkg Fell Short&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#the-problems-no-package-manager-solves&quot; id=&quot;markdown-toc-the-problems-no-package-manager-solves&quot;&gt;The Problems No Package Manager Solves&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#five-workflows-and-counting&quot; id=&quot;markdown-toc-five-workflows-and-counting&quot;&gt;Five Workflows and Counting&lt;/a&gt;&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#the-bootstrap-script&quot; id=&quot;markdown-toc-the-bootstrap-script&quot;&gt;The Bootstrap Script&lt;/a&gt;    &lt;ul&gt;
      &lt;li&gt;&lt;a href=&quot;#how-it-evolved&quot; id=&quot;markdown-toc-how-it-evolved&quot;&gt;How It Evolved&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#key-design-decisions&quot; id=&quot;markdown-toc-key-design-decisions&quot;&gt;Key Design Decisions&lt;/a&gt;&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#what-we-learned&quot; id=&quot;markdown-toc-what-we-learned&quot;&gt;What We Learned&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h1 id=&quot;why-dependency-management-is-hard&quot;&gt;Why Dependency Management Is Hard&lt;/h1&gt;

&lt;h2 id=&quot;a-combinatorial-explosion&quot;&gt;A Combinatorial Explosion&lt;/h2&gt;

&lt;p&gt;Suppose your project depends on Package A &amp;gt;=1.0 and Package B &amp;gt;=2.0, but every version of A &amp;gt;=1.0 requires B &amp;lt;=1.5. You are stuck. With hundreds of packages, each with multiple versions and possibly conflicting or conditional dependencies, the problem &lt;strong&gt;explodes combinatorially&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;This is not hyperbole. Package dependency resolution is &lt;a href=&quot;https://research.swtch.com/version-sat&quot;&gt;&lt;strong&gt;NP-complete&lt;/strong&gt;&lt;/a&gt;. It reduces directly to the &lt;a href=&quot;https://en.wikipedia.org/wiki/Boolean_satisfiability_problem&quot;&gt;Boolean satisfiability problem (SAT)&lt;/a&gt;. Each package version is a boolean variable, each dependency constraint is a clause, and finding an installable set is equivalent to finding a satisfying assignment. Real-world tools handle this with &lt;strong&gt;heuristics&lt;/strong&gt; (like &lt;a href=&quot;https://wiki.debian.org/Apt&quot;&gt;APT&lt;/a&gt; and &lt;a href=&quot;https://pip.pypa.io/en/stable/topics/dependency-resolution/&quot;&gt;pip&lt;/a&gt;) or outright &lt;strong&gt;SAT solvers&lt;/strong&gt; (like &lt;a href=&quot;https://github.com/rpm-software-management/libsolv&quot;&gt;libsolv&lt;/a&gt;, used by DNF and Zypper).&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;In the worst case, finding a consistent set of dependency versions requires exponential time. Verifying one is polynomial, but discovering it may not be.&lt;/p&gt;
&lt;/blockquote&gt;
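
&lt;p&gt;As a minimal sketch of the unsatisfiable example above (the package names and versions are the hypothetical ones from the text, not a real resolver), a brute-force approach simply enumerates every combination of versions and tests the constraints; real tools replace this enumeration with heuristics or SAT solvers:&lt;/p&gt;

```python
from itertools import product

# Hypothetical mini-resolver: pick one version per package, then test
# every combination against the constraints from the example above.
versions = {'A': ['1.0', '1.1'], 'B': ['1.5', '2.0']}

def satisfies(pick):
    project_a = pick['A'] in ('1.0', '1.1')   # project requires A 1.0 or newer
    project_b = pick['B'] == '2.0'            # project requires B 2.0 or newer
    a_needs_b = pick['B'] == '1.5'            # every A 1.0 or newer requires B 1.5 or older
    return project_a and project_b and a_needs_b

solutions = [combo for combo in product(*versions.values())
             if satisfies(dict(zip(versions, combo)))]
print(solutions)   # [] -- no assignment satisfies all three constraints
```

&lt;p&gt;The empty result is the point: the constraints contradict each other, and with hundreds of packages the search space grows exponentially rather than four combinations at a time.&lt;/p&gt;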

&lt;div class=&quot;admonition&quot;&gt;&lt;div class=&quot;admonition-title&quot;&gt;How other ecosystems hide this complexity&lt;/div&gt;
&lt;p&gt;Most users never notice this because package managers use &lt;strong&gt;tricks&lt;/strong&gt;. When &lt;a href=&quot;https://docs.npmjs.com/cli/v10/commands/npm-install#algorithm&quot;&gt;npm&lt;/a&gt; cannot satisfy all constraints, it installs &lt;strong&gt;multiple versions&lt;/strong&gt; of the same package in nested &lt;code&gt;node_modules&lt;/code&gt; directories, so both versions get bundled into the final application. &lt;a href=&quot;https://doc.rust-lang.org/cargo/reference/resolver.html#semver-compatibility&quot;&gt;Cargo&lt;/a&gt; does something similar: when two crates require SemVer-incompatible versions of the same dependency, it includes both in the build. Most users are not aware this is happening, and would probably not be happy about it if they were: bundling two versions of the same library increases binary size, can cause subtle bugs when types from different versions interact, and makes the dependency graph harder to reason about. In C++, the trick is not even available. You cannot link two versions of the same library into a single binary. When the constraints are unsatisfiable, there is no quiet fallback. You get a build error.&lt;/p&gt;
&lt;/div&gt;

&lt;h2 id=&quot;why-c-makes-it-worse&quot;&gt;Why C++ Makes It Worse&lt;/h2&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;No standard package format&lt;/strong&gt;: unlike &lt;a href=&quot;https://www.npmjs.com/&quot;&gt;npm&lt;/a&gt;, &lt;a href=&quot;https://pypi.org/&quot;&gt;pip&lt;/a&gt;, or &lt;a href=&quot;https://crates.io/&quot;&gt;Cargo&lt;/a&gt;, C++ has no universal package format. Every dependency must be compiled with compatible settings, and pre-built binaries are the exception rather than the rule.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;ABI compatibility&lt;/strong&gt;: different compilers, compiler versions, and even compiler flags can produce &lt;strong&gt;incompatible binaries&lt;/strong&gt;. You cannot just link any two object files together.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;API compatibility&lt;/strong&gt;: header-only and compiled libraries have different concerns. Template instantiation happens at the consumer’s compile time, so a header-only library can break when the consumer’s compiler or standard library version changes, even if the library itself has not changed.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Categorical options&lt;/strong&gt;: choices like shared/static linking, exceptions on/off, and RTTI on/off need to be &lt;strong&gt;consistent across the entire dependency chain&lt;/strong&gt;. If one library is built with exceptions disabled and another expects them, you get subtle runtime failures or linker errors.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Viral flags&lt;/strong&gt;: some flags &lt;strong&gt;must propagate&lt;/strong&gt; to all dependencies, some &lt;strong&gt;must not&lt;/strong&gt;, and some are &lt;strong&gt;optional per dependency&lt;/strong&gt;. Build type is a good example of the nuance. You might want MrDocs in Debug, but building LLVM in Debug makes it too slow to use. Building LLVM in Release makes it too hard to debug when the bug is in LLVM. So you end up with combinations like “MrDocs in Debug, LLVM in Debug with optimization, everything else in Release.” The propagation decision can vary &lt;strong&gt;per dependency and per situation&lt;/strong&gt;.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Viral macros&lt;/strong&gt;: preprocessor macros can create &lt;strong&gt;different binary versions&lt;/strong&gt; of the same library. If &lt;code&gt;spdlog&lt;/code&gt; depends on &lt;code&gt;fmt&lt;/code&gt; with a certain macro configuration, every other dependency on &lt;code&gt;fmt&lt;/code&gt; in the hierarchy has to use the same macro. A macro used in &lt;code&gt;fmt&lt;/code&gt; might affect the macros available in peer dependencies that define &lt;code&gt;fmt&lt;/code&gt; formatters. This creates constraints that propagate transitively through the dependency graph, and no package manager tracks macro configurations.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Sanitizer propagation&lt;/strong&gt;: sanitizers deserve their own mention because they are not all equally viral. &lt;strong&gt;UndefinedBehaviorSanitizer&lt;/strong&gt; is the lightest: it relies on compile-time checks and can share the same dependency builds as the unsanitized configuration. &lt;a href=&quot;https://clang.llvm.org/docs/AddressSanitizer.html&quot;&gt;&lt;strong&gt;AddressSanitizer&lt;/strong&gt;&lt;/a&gt;, &lt;a href=&quot;https://clang.llvm.org/docs/MemorySanitizer.html&quot;&gt;&lt;strong&gt;MemorySanitizer&lt;/strong&gt;&lt;/a&gt;, and &lt;strong&gt;ThreadSanitizer&lt;/strong&gt; each need their own &lt;strong&gt;separately built LLVM&lt;/strong&gt; with instrumented dependencies. ASan and MSan go further: they also require an &lt;strong&gt;instrumented libc++&lt;/strong&gt; built as an LLVM runtime. MSan is the extreme case. It &lt;a href=&quot;https://clang.llvm.org/docs/MemorySanitizer.html#handling-external-code&quot;&gt;reports false positives on any uninstrumented code&lt;/a&gt;, so the entire chain has to be instrumented: first build the &lt;strong&gt;C++ standard library&lt;/strong&gt; with MSan, then build all dependencies against that instrumented standard library, then build MrDocs itself. That is three layers of builds with a single flag threading through all of them. No package manager models these propagation levels.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Build type incompatibility&lt;/strong&gt;: on MSVC, Debug and Release are &lt;a href=&quot;https://learn.microsoft.com/en-us/cpp/porting/binary-compat-2015-2017?view=msvc-170&quot;&gt;&lt;strong&gt;ABI-incompatible&lt;/strong&gt;&lt;/a&gt; at the CRT level. This means you cannot just build your dependencies in Release and your project in Debug for a faster development cycle. You need all of them on the same side of the Debug/Release boundary. A Debug build with optimization (“OptimizedDebug”) is structurally different from a Release build with debug symbols (“RelWithDebInfo”). The first uses the Debug CRT with &lt;code&gt;/O2&lt;/code&gt;; the second uses the Release CRT with debug info. Mixing them causes linker errors. This forces you into configurations that no standard build type represents.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Platform-specific toolchain setup&lt;/strong&gt;: each platform has its own way of locating and configuring compilers. On Linux, GCC and Clang are on &lt;code&gt;PATH&lt;/code&gt;. On macOS, Homebrew Clang installs toolchain components (&lt;code&gt;llvm-ar&lt;/code&gt;, &lt;code&gt;llvm-ranlib&lt;/code&gt;, &lt;code&gt;ld.lld&lt;/code&gt;) and its standard library (&lt;code&gt;libc++&lt;/code&gt;) in non-standard locations that differ from AppleClang’s. The headers and libraries are not on the default search path, so you have to pass their locations explicitly through compiler and linker flags for everything you compile. On Windows, MSVC does not live on &lt;code&gt;PATH&lt;/code&gt; at all: it requires environment variables set by &lt;a href=&quot;https://learn.microsoft.com/en-us/cpp/build/building-on-the-command-line?view=msvc-170&quot;&gt;&lt;code&gt;vcvarsall.bat&lt;/code&gt;&lt;/a&gt;, and locating the correct Visual Studio installation requires &lt;a href=&quot;https://github.com/microsoft/vswhere&quot;&gt;&lt;code&gt;vswhere.exe&lt;/code&gt;&lt;/a&gt;. None of this is handled by package managers.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Compiler and standard library combinations&lt;/strong&gt;: on Linux, Clang uses whatever &lt;code&gt;libstdc++&lt;/code&gt; is installed on the system rather than shipping its own. Ubuntu 24.04 ships GCC 13, but MrDocs needs GCC 14 features (like &lt;code&gt;&amp;lt;print&amp;gt;&lt;/code&gt;). So a developer using Clang 20 on a fresh Ubuntu machine gets build errors from the standard library, not from their own code. Testing every Clang version with every GCC’s &lt;code&gt;libstdc++&lt;/code&gt; is infeasible, but specific combinations matter, and the mismatch is not obvious to the developer when it happens.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Platform explosion&lt;/strong&gt;: Windows/Linux/macOS multiplied by Debug/Release/OptimizedDebug, GCC/Clang/MSVC/AppleClang, shared/static, and sanitizer variants creates a &lt;strong&gt;combinatorial explosion&lt;/strong&gt; of configurations that all need to be tested. Each platform also has its own quirks: git symlinks behave differently on Windows, Ninja availability varies, and even the way you specify compiler flags differs between MSVC and GCC/Clang.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Conditional dependencies&lt;/strong&gt;: in C++, build options frequently add or remove entire dependencies. An image processing library might support PNG, JPEG, and WebP, each requiring its own codec library. Enabling or disabling a format changes the dependency graph. Build scripts also commonly look for &lt;strong&gt;host dependencies&lt;/strong&gt; (system libraries for talking to the OS, GPU, or network) that you are not expected to build yourself but that must be present on the machine. The dependency graph is not static; it depends on the configuration.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Closed-source dependencies&lt;/strong&gt;: all of the problems above assume you have the source code and can rebuild with the correct flags. Sometimes you do not. When a dependency is distributed only as a pre-built binary, there is no way to adjust the ABI, propagate sanitizer flags, or change the build type. If it was compiled with incompatible settings, there is nothing you can do about it. It becomes a hard constraint on the entire system.&lt;/li&gt;
&lt;/ul&gt;

&lt;div class=&quot;mermaid&quot;&gt;
%%{init: {&quot;theme&quot;: &quot;base&quot;, &quot;themeVariables&quot;: {&quot;primaryColor&quot;: &quot;#f7f9ff&quot;, &quot;primaryBorderColor&quot;: &quot;#9aa7e8&quot;, &quot;primaryTextColor&quot;: &quot;#1f2a44&quot;, &quot;lineColor&quot;: &quot;#b4bef2&quot;, &quot;secondaryColor&quot;: &quot;#fbf8ff&quot;, &quot;tertiaryColor&quot;: &quot;#ffffff&quot;, &quot;fontSize&quot;: &quot;14px&quot;}}}%%
mindmap
  root((C++ Dependencies))
    No Standard Format
      Built from source
      Closed-source binaries
    Compatibility
      ABI
      API / Templates
      Build Type / CRT
    Propagation
      Viral flags
      Viral macros
      Sanitizers
      Categorical options
    Dependencies
      Conditional on build options
      Host / system libraries
      Closed-source binaries
    Platform
      Toolchain setup
      Compiler + stdlib combos
      Combinatorial explosion
&lt;/div&gt;

&lt;p&gt;In C++, the general case involves so many dimensions that &lt;strong&gt;no existing tool handles all of them well&lt;/strong&gt;.&lt;/p&gt;

&lt;div class=&quot;admonition&quot;&gt;&lt;div class=&quot;admonition-title&quot;&gt;What about CPS?&lt;/div&gt;
&lt;p&gt;The &lt;a href=&quot;https://github.com/cps-org/cps&quot;&gt;Common Package Specification (CPS)&lt;/a&gt; is an interesting effort to standardize how C++ packages are &lt;strong&gt;consumed&lt;/strong&gt;. A &lt;code&gt;.cps&lt;/code&gt; file describes everything a build system needs to find and link against an already-built package: include paths, library paths, compiler flags. This is valuable, but it operates at the &lt;strong&gt;point of consumption&lt;/strong&gt;, where we have already made all the decisions about platform, compiler, build type, and sanitizers. It assumes the dependency has already been built in a compatible way. It does not describe how to &lt;strong&gt;build&lt;/strong&gt; the dependency with the correct flags in the first place. For example, if we need AddressSanitizer, all dependencies must be built with ASan instrumentation. A CPS file tells us how to consume a package that was built with ASan, but it does not know how to rebuild that package with ASan if it was not. The problems described above are all about making those upstream decisions correctly, which happens before CPS enters the picture.&lt;/p&gt;
&lt;/div&gt;

&lt;h1 id=&quot;what-went-wrong-for-mrdocs&quot;&gt;What Went Wrong for MrDocs&lt;/h1&gt;

&lt;p&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs&quot;&gt;MrDocs&lt;/a&gt; depends on &lt;a href=&quot;https://llvm.org/&quot;&gt;&lt;strong&gt;LLVM&lt;/strong&gt;&lt;/a&gt;, &lt;a href=&quot;https://duktape.org/&quot;&gt;&lt;strong&gt;Duktape&lt;/strong&gt;&lt;/a&gt;, &lt;a href=&quot;https://www.lua.org/&quot;&gt;&lt;strong&gt;Lua&lt;/strong&gt;&lt;/a&gt;, and &lt;a href=&quot;https://gitlab.gnome.org/GNOME/libxml2&quot;&gt;&lt;strong&gt;libxml2&lt;/strong&gt;&lt;/a&gt; (and previously also &lt;a href=&quot;https://fmt.dev/&quot;&gt;&lt;strong&gt;fmt&lt;/strong&gt;&lt;/a&gt;). Over time, three categories of problems accumulated.&lt;/p&gt;

&lt;h2 id=&quot;where-vcpkg-fell-short&quot;&gt;Where vcpkg Fell Short&lt;/h2&gt;

&lt;p&gt;For over a year, we used &lt;a href=&quot;https://vcpkg.io/&quot;&gt;vcpkg&lt;/a&gt; to manage these dependencies. MrDocs is a tool, not a library, so we only needed vcpkg for acquiring our own dependencies rather than for making ourselves easy to consume downstream. It worked at first, but the complexity of our workflows gradually outgrew what vcpkg was designed to handle:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Build types&lt;/strong&gt;: MrDocs developers frequently need a Debug build with optimization enabled because the codebase is large enough that an unoptimized debug build is painfully slow. On MSVC, Debug and Release are ABI-incompatible, so a “Debug with optimization” configuration does not fit neatly into vcpkg’s &lt;strong&gt;Debug/Release binary model&lt;/strong&gt;.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Patches and dual paths&lt;/strong&gt;: vcpkg applies patches to libraries that do not follow CMake conventions. This meant we had to support &lt;strong&gt;two ways&lt;/strong&gt; to find the same library: the vcpkg-patched version and the upstream version. When libraries do follow CMake conventions, we do not need vcpkg as much. But when they do not, the patches make vcpkg less useful rather than more. Contributors kept opening PRs proposing yet another way to locate a dependency. In a build script, &lt;strong&gt;every new path is expensive to test&lt;/strong&gt;.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Rigid baseline&lt;/strong&gt;: vcpkg’s baseline model pins all libraries to a single snapshot. We are tightly coupled to a specific LLVM commit, so we could not use vcpkg for LLVM from the start. That alone meant vcpkg could only manage a subset of our dependencies. On top of that, when &lt;code&gt;fmt&lt;/code&gt; bumped a major version and broke downstream consumers, it showed that the baseline approach is too rigid for projects that use a few unrelated libraries. Sometimes the entire baseline would be updated and libraries we had no reason to touch just got upgraded, introducing unexpected breakage. Different developers also had different baseline expectations, so the same &lt;code&gt;vcpkg.json&lt;/code&gt; could produce different results depending on when someone last updated.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Missing dependencies&lt;/strong&gt;: some dependencies were not in vcpkg at all, or not configured the way we needed them. LLVM is the classic example: we need a specific commit, built with specific flags. Tools do not provide their own vcpkg integration; everything is centralized in the vcpkg repository. This forced us into &lt;strong&gt;mixed-source dependency management&lt;/strong&gt; where some deps come from vcpkg and some from custom scripts.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;No variant support&lt;/strong&gt;: when we needed &lt;strong&gt;sanitizer builds&lt;/strong&gt; (ASan, MSan, UBSan, TSan), vcpkg had nothing to offer. It knows Debug and Release. Building sanitized variants required custom scripts or custom environment variables to pass the information to the package manager internally.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Manifest vs. classic mode&lt;/strong&gt;: vcpkg offers two modes for specifying dependencies. Some users simply did not like one of the modes, and we had so many complaints that we ended up supporting both. Unlike npm’s local and global modes, vcpkg’s manifest and classic modes do not play well together, so supporting both effectively meant maintaining two separate dependency workflows.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The vcpkg team has done outstanding work on a genuinely difficult problem, and vcpkg handles a lot of it well. Many of these limitations may simply be the best anyone can do given the complexity of the language. Most of the problems listed above do have &lt;strong&gt;external solutions&lt;/strong&gt;: you can set custom triplets, configure environment variables, pass flags manually, and configure build types from outside vcpkg. That is how we handled it for a long time. The issue is that those solutions live &lt;strong&gt;outside the vcpkg workflow&lt;/strong&gt;. We owned that part, and maintaining it was hard. Having vcpkg in the equation meant one more workflow to support, even when the problem was not vcpkg’s fault. The accumulated complexity of maintaining vcpkg alongside our own custom scripts is what eventually became unsustainable.&lt;/p&gt;

&lt;h2 id=&quot;the-problems-no-package-manager-solves&quot;&gt;The Problems No Package Manager Solves&lt;/h2&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Dependency acquisition at configure time&lt;/strong&gt;: we once had &lt;code&gt;FetchContent&lt;/code&gt; as an optional alternative to &lt;code&gt;find_package&lt;/code&gt;, so CMake could download dependencies if they were not already present. A team member’s internet went down during a build and CMake failed. The reaction was strong: &lt;strong&gt;nobody should be required to have internet to compile a project they already downloaded&lt;/strong&gt;. The feature was removed entirely. This reinforced that dependency acquisition needed to be a &lt;strong&gt;separate, explicit step&lt;/strong&gt; that completes before the build system even runs.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;IDE integration&lt;/strong&gt;: developers had to manually configure run configurations for CLion, VS Code, or Visual Studio, and those configurations broke whenever the application changed, build options were added, or targets were renamed.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Platform-specific toolchain setup&lt;/strong&gt;: on macOS with Homebrew Clang, the standard tool paths (&lt;code&gt;llvm-ar&lt;/code&gt;, &lt;code&gt;llvm-ranlib&lt;/code&gt;, &lt;code&gt;ld.lld&lt;/code&gt;) are not where the system expects them. On Windows, MSVC requires a Developer Command Prompt with specific environment variables. Setting up either of these correctly from scratch is its own project.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Debugger integration&lt;/strong&gt;: there was no automated way to set up LLDB formatters or GDB pretty printers for Clang and MrDocs symbols. Developers working on the AST had to inspect raw memory layouts.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;The sheer volume of instructions&lt;/strong&gt;: the build script should not assume a package manager, so you end up documenting both the manual and the package manager path. For each dependency, for each variant (sanitizers, special build types), for each platform. When the package manager path does not work for a given configuration, the developer falls back to the manual path, and that path has to be maintained too.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;five-workflows-and-counting&quot;&gt;Five Workflows and Counting&lt;/h2&gt;

&lt;p&gt;The proliferation was gradual. We started with manual CMake commands, then added FetchContent as an alternative, then adopted vcpkg, then had to support both vcpkg modes, then needed custom CI scripts. By mid-2025, we had accumulated &lt;strong&gt;five different workflows&lt;/strong&gt; for installing dependencies:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;&lt;strong&gt;Manual CMake&lt;/strong&gt;: the original path, configuring everything by hand&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;FetchContent&lt;/strong&gt;: later removed after the internet incident&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;vcpkg&lt;/strong&gt; (manifest mode): the “official” package manager path&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;vcpkg&lt;/strong&gt; (classic mode): because some users did not like manifest mode&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Custom CI scripts&lt;/strong&gt;: CI uses its own language to describe workflows, and there was no single command that could configure all possible build variants&lt;/li&gt;
&lt;/ol&gt;

&lt;div class=&quot;mermaid&quot;&gt;
%%{init: {&quot;theme&quot;: &quot;base&quot;, &quot;themeVariables&quot;: {&quot;primaryColor&quot;: &quot;#fce4e4&quot;, &quot;primaryBorderColor&quot;: &quot;#e8a0a0&quot;, &quot;primaryTextColor&quot;: &quot;#1f2a44&quot;, &quot;lineColor&quot;: &quot;#e8a0a0&quot;, &quot;secondaryColor&quot;: &quot;#fef3e4&quot;, &quot;tertiaryColor&quot;: &quot;#ffffff&quot;, &quot;fontSize&quot;: &quot;14px&quot;}}}%%
flowchart LR
    A[New Developer] --&amp;gt; B{Which workflow?}
    B --&amp;gt; C[Manual CMake]
    B --&amp;gt; D[FetchContent]
    B --&amp;gt; E[vcpkg manifest]
    B --&amp;gt; F[vcpkg classic]
    B --&amp;gt; G[CI scripts]
&lt;/div&gt;

&lt;p&gt;We tried to write instructions that covered all of this. For each dependency, we explained every way to fetch and build it: manual, vcpkg manifest, vcpkg classic. On top of that, each special variant (sanitizer builds, unusual build type combinations) needed yet another set of instructions per dependency per workflow. The documentation grew combinatorially, and people got lost.&lt;/p&gt;

&lt;h1 id=&quot;the-bootstrap-script&quot;&gt;The Bootstrap Script&lt;/h1&gt;

&lt;p&gt;The core principle was &lt;strong&gt;separation of concerns&lt;/strong&gt;: CMake builds the project, but something else manages the dependencies. The bootstrap script fills that gap.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Before:&lt;/strong&gt;&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Clone and build LLVM (specific commit)
git clone https://github.com/llvm/llvm-project.git
cd llvm-project &amp;amp;&amp;amp; git checkout dc4cef81d47c...
cmake -S llvm -B build -DCMAKE_BUILD_TYPE=Release ...
cmake --build build
cmake --install build
cd ..

# Download and build Duktape
curl -L https://github.com/.../duktape-2.7.0.tar.xz | tar xJ
cmake -S duktape -B duktape/build ...
cmake --build duktape/build
cmake --install duktape/build

# Repeat for libxml2, Lua...
# Then configure MrDocs with all the install paths
cmake -S mrdocs -B mrdocs/build \
  -DLLVM_ROOT=/path/to/llvm/install \
  -Dduktape_ROOT=/path/to/duktape/install \
  -Dlibxml2_ROOT=/path/to/libxml2/install \
  ...
cmake --build mrdocs/build
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;strong&gt;After:&lt;/strong&gt;&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;python bootstrap.py
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The script handles everything else:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;&lt;strong&gt;Probes MSVC&lt;/strong&gt; (Windows only): detects and imports the Visual Studio development environment&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Checks system prerequisites&lt;/strong&gt;: validates that cmake, git, python, and a C/C++ compiler are available&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Sets up compilers&lt;/strong&gt;: resolves compiler paths, detects Homebrew Clang on macOS&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Configures build options&lt;/strong&gt;: prompts for build type, sanitizer, and preset name (or accepts defaults in non-interactive mode for CI)&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Probes compilers&lt;/strong&gt;: runs a dummy CMake project to extract the compiler ID, version, and capabilities before building anything&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Sets up Ninja&lt;/strong&gt;: finds or downloads the Ninja build system&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Installs dependencies&lt;/strong&gt;: fetches and builds Duktape, Lua, libxml2, and LLVM in topological order, each with the correct flags for the chosen configuration&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Generates CMake presets&lt;/strong&gt;: writes a &lt;a href=&quot;https://cmake.org/cmake/help/latest/manual/cmake-presets.7.html&quot;&gt;&lt;code&gt;CMakeUserPresets.json&lt;/code&gt;&lt;/a&gt; with all dependency paths, compiler configuration, and IDE settings&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Generates IDE configurations&lt;/strong&gt;: run/debug configs for CLion, VS Code, and Visual Studio, plus debugger pretty printers&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Builds MrDocs&lt;/strong&gt;: configures, builds, and optionally installs MrDocs using the generated presets&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Runs tests&lt;/strong&gt;: executes the test suite in parallel&lt;/li&gt;
&lt;/ol&gt;
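
&lt;p&gt;Step 2 can be sketched in a few lines of Python. This is an illustrative snippet, not the MrDocs implementation; the helper name is hypothetical, and the real script also validates versions and a working C/C++ compiler:&lt;/p&gt;

```python
import shutil

def missing_prerequisites(tools=("cmake", "git", "python3")):
    """Return the required tools that are not on PATH.

    Hypothetical helper: the real bootstrap performs richer checks,
    such as minimum versions and compiler sanity probes.
    """
    return [tool for tool in tools if shutil.which(tool) is None]
```

&lt;p&gt;In non-interactive mode (CI), a check like this can fail fast on a non-empty result instead of prompting the user.&lt;/p&gt;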

&lt;div class=&quot;mermaid&quot;&gt;
%%{init: {&quot;theme&quot;: &quot;base&quot;, &quot;themeVariables&quot;: {&quot;primaryColor&quot;: &quot;#e4eee8&quot;, &quot;primaryBorderColor&quot;: &quot;#affbd6&quot;, &quot;primaryTextColor&quot;: &quot;#000000&quot;, &quot;lineColor&quot;: &quot;#baf9d9&quot;, &quot;secondaryColor&quot;: &quot;#f0eae4&quot;, &quot;tertiaryColor&quot;: &quot;#ebeaf4&quot;, &quot;fontSize&quot;: &quot;14px&quot;}}}%%
sequenceDiagram
    participant U as Developer
    participant B as bootstrap.py
    participant S as System
    participant D as Dependencies
    participant C as CMake
    participant I as IDE

    U-&amp;gt;&amp;gt;B: python bootstrap.py
    B-&amp;gt;&amp;gt;S: Probe MSVC environment (Windows)
    B-&amp;gt;&amp;gt;S: Check prerequisites (cmake, git, compiler)
    B-&amp;gt;&amp;gt;S: Set up compilers and Ninja
    B-&amp;gt;&amp;gt;U: Prompt for build type, sanitizer, preset
    B-&amp;gt;&amp;gt;S: Probe compiler ID and version
    B-&amp;gt;&amp;gt;D: Fetch and build dependencies
    B-&amp;gt;&amp;gt;C: Generate CMakeUserPresets.json
    B-&amp;gt;&amp;gt;I: Generate IDE and debugger configs
    B-&amp;gt;&amp;gt;C: Build and install MrDocs
    B-&amp;gt;&amp;gt;C: Run tests
&lt;/div&gt;

&lt;h2 id=&quot;how-it-evolved&quot;&gt;How It Evolved&lt;/h2&gt;

&lt;p&gt;The first commit landed on &lt;strong&gt;July 16, 2025&lt;/strong&gt;. Over the next eight months, the script went through seven distinct phases of development across roughly 57 commits.&lt;/p&gt;

&lt;div class=&quot;mermaid&quot;&gt;
%%{init: {&quot;theme&quot;: &quot;base&quot;, &quot;themeVariables&quot;: {&quot;primaryColor&quot;: &quot;#f7f9ff&quot;, &quot;primaryBorderColor&quot;: &quot;#9aa7e8&quot;, &quot;primaryTextColor&quot;: &quot;#1f2a44&quot;, &quot;lineColor&quot;: &quot;#b4bef2&quot;, &quot;secondaryColor&quot;: &quot;#fbf8ff&quot;, &quot;tertiaryColor&quot;: &quot;#ffffff&quot;, &quot;fontSize&quot;: &quot;14px&quot;}}}%%
timeline
  title bootstrap.py Evolution
  Jul 2025 : Foundation and UX
  Aug 2025 : IDE configs, sanitizers, and Windows
  Sep 2025 : Developer tooling and LLDB
  Dec 2025 : Modularization into package
  Mar 2026 : CI integration
&lt;/div&gt;

&lt;p&gt;The &lt;strong&gt;first week&lt;/strong&gt; (July 16–19) was about getting the one-liner to work at all: the core workflow, colored prompts, parallel test execution, and the first installation docs.&lt;/p&gt;

&lt;details&gt;
  &lt;summary&gt;Phase 1: Foundation (July 16–19, 2025)&lt;/summary&gt;

  &lt;ul&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/521cc704&quot;&gt;521cc704&lt;/a&gt; build: bootstrap script&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/e32bb36e&quot;&gt;e32bb36e&lt;/a&gt; build: bootstrap uses another path for mrdocs source when not already called from source directory&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/e7e3ef51&quot;&gt;e7e3ef51&lt;/a&gt; build: bootstrap build options list valid types&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/75c28e45&quot;&gt;75c28e45&lt;/a&gt; build: bootstrap prompts use colors&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/c156a05f&quot;&gt;c156a05f&lt;/a&gt; build: bootstrap removes redundant flags&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/c14f071b&quot;&gt;c14f071b&lt;/a&gt; build: bootstrap runs tests in parallel&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/1a9de28c&quot;&gt;1a9de28c&lt;/a&gt; docs: one-liner installation instructions&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/76611f93&quot;&gt;76611f93&lt;/a&gt; build: bootstrap paths use cmake relative path shortcuts&lt;/li&gt;
  &lt;/ul&gt;

&lt;/details&gt;

&lt;p&gt;The &lt;strong&gt;second and third weeks&lt;/strong&gt; turned the script into a development environment setup tool by generating IDE run configurations for CLion, VS Code, and Visual Studio. By the end of July, the script also supported &lt;strong&gt;custom compilers&lt;/strong&gt;, &lt;strong&gt;sanitizer builds&lt;/strong&gt;, and &lt;strong&gt;Homebrew Clang&lt;/strong&gt; on macOS.&lt;/p&gt;

&lt;details&gt;
  &lt;summary&gt;Phase 2: IDE Integration (July 22–28, 2025)&lt;/summary&gt;

  &lt;ul&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/502cfbd8&quot;&gt;502cfbd8&lt;/a&gt; build: bootstrap generates debug configurations&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/b546c260&quot;&gt;b546c260&lt;/a&gt; build: bootstrap dependency refresh run configurations&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/83525d38&quot;&gt;83525d38&lt;/a&gt; build: bootstrap documentation run configurations&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/2cfdd19e&quot;&gt;2cfdd19e&lt;/a&gt; build: bootstrap website run configurations&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/ca4b04d3&quot;&gt;ca4b04d3&lt;/a&gt; build: bootstrap MrDocs self-reference run configuration&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/b5f53bd9&quot;&gt;b5f53bd9&lt;/a&gt; build: bootstrap XML lint run configurations&lt;/li&gt;
  &lt;/ul&gt;

&lt;/details&gt;

&lt;details&gt;
  &lt;summary&gt;Phase 3: Build Variants and Sanitizers (July 29–August 1, 2025)&lt;/summary&gt;

  &lt;ul&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/0a751acd&quot;&gt;0a751acd&lt;/a&gt; build: bootstrap supports custom compilers&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/ff62919f&quot;&gt;ff62919f&lt;/a&gt; build: LLVM runtimes come from presets&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/2b757fac&quot;&gt;2b757fac&lt;/a&gt; build: bootstrap debug presets with release dependencies&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/0d179e84&quot;&gt;0d179e84&lt;/a&gt; build: installation workflow uses Ninja for all projects&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/3d8fa853&quot;&gt;3d8fa853&lt;/a&gt; build: installation workflow supports sanitizers&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/26cec9d8&quot;&gt;26cec9d8&lt;/a&gt; build: installation workflow supports homebrew clang&lt;/li&gt;
  &lt;/ul&gt;

&lt;/details&gt;

&lt;p&gt;&lt;strong&gt;August&lt;/strong&gt; was the cross-platform month. Windows support required probing &lt;code&gt;vcvarsall.bat&lt;/code&gt;, handling Visual Studio tool paths, and ensuring git symlinks worked. Paths were made relocatable so &lt;code&gt;CMakeUserPresets.json&lt;/code&gt; files could be shared across machines.&lt;/p&gt;

&lt;details&gt;
  &lt;summary&gt;Phase 4: Cross-Platform Polish (August 2025)&lt;/summary&gt;

  &lt;ul&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/fc2aa2d6&quot;&gt;fc2aa2d6&lt;/a&gt; build: external include directories are relocatable&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/21c206b9&quot;&gt;21c206b9&lt;/a&gt; build: bootstrap vscode run configurations&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/d2f9c204&quot;&gt;d2f9c204&lt;/a&gt; build: Visual Studio run configurations&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/0ca523e7&quot;&gt;0ca523e7&lt;/a&gt; build: bootstrap supports default Visual Studio tool paths on Windows&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/4b79ef41&quot;&gt;4b79ef41&lt;/a&gt; build(bootstrap): probe vcvarsall environment&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/4d705c96&quot;&gt;4d705c96&lt;/a&gt; build(bootstrap): ensure git symlinks&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/524e7923&quot;&gt;524e7923&lt;/a&gt; build(bootstrap): visual studio run configurations and tasks&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/94a5b799&quot;&gt;94a5b799&lt;/a&gt; build(bootstrap): remove dependency build directories after installation&lt;/li&gt;
  &lt;/ul&gt;

&lt;/details&gt;

&lt;p&gt;&lt;strong&gt;September and October&lt;/strong&gt; added developer tooling: LLDB data formatters for Clang and MrDocs symbols, pretty printer configurations, libcxx hardening mode, and the style guide documentation.&lt;/p&gt;

&lt;details&gt;
  &lt;summary&gt;Phase 5: Developer Tooling (September–October 2025)&lt;/summary&gt;

  &lt;ul&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/fc98559a&quot;&gt;fc98559a&lt;/a&gt; build(bootstrap): include pretty printers configuration&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/069bd8f4&quot;&gt;069bd8f4&lt;/a&gt; feat(lldb): LLDB data formatters&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/1b39fdd7&quot;&gt;1b39fdd7&lt;/a&gt; fix(lldb): clang ast formatters&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/988e9ebc&quot;&gt;988e9ebc&lt;/a&gt; build(bootstrap): config info for docs&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/f48bbd2f&quot;&gt;f48bbd2f&lt;/a&gt; build: bootstrap enables libcxx hardening mode&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/5e16e3fa&quot;&gt;5e16e3fa&lt;/a&gt; Fix support for clang cl-mode driver (#1069)&lt;/li&gt;
  &lt;/ul&gt;

&lt;/details&gt;

&lt;p&gt;By &lt;strong&gt;December&lt;/strong&gt;, the monolithic 2,700-line &lt;code&gt;bootstrap.py&lt;/code&gt; was refactored into a proper Python package under &lt;code&gt;util/bootstrap/&lt;/code&gt; with 20+ modules organized by concern: &lt;code&gt;core/&lt;/code&gt; (platform detection, options, UI), &lt;code&gt;configs/&lt;/code&gt; (IDE run configurations), &lt;code&gt;presets/&lt;/code&gt; (CMake preset generation), &lt;code&gt;recipes/&lt;/code&gt; (dependency building), and &lt;code&gt;tools/&lt;/code&gt; (compiler detection). The package also includes its own &lt;strong&gt;test suite&lt;/strong&gt;, which means one person changing the bootstrap script for their platform is not going to break it for someone else on a different platform.&lt;/p&gt;

&lt;details&gt;
  &lt;summary&gt;Phase 6: Modularization (November–December 2025)&lt;/summary&gt;

  &lt;ul&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/0d4a8459&quot;&gt;0d4a8459&lt;/a&gt; build(bootstrap): modularize recipes&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/7ba4699b&quot;&gt;7ba4699b&lt;/a&gt; build(bootstrap): transition banner&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/99d61207&quot;&gt;99d61207&lt;/a&gt; build(bootstrap): handle empty input and “none” in prompt retry&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/e3b3fd02&quot;&gt;e3b3fd02&lt;/a&gt; build(bootstrap): convert script into package structure&lt;/li&gt;
  &lt;/ul&gt;

&lt;/details&gt;

&lt;p&gt;In &lt;strong&gt;March 2026&lt;/strong&gt;, the bootstrap script replaced the custom CI dependency scripts. This was a major milestone: users, developers, and CI now all use the same tool. CI was simplified significantly because the dependency steps are no longer custom shell commands maintained separately. And because CI runs the bootstrap on every push, the script itself is continuously tested across all platforms. If the bootstrap breaks on any platform, CI catches it immediately.&lt;/p&gt;

&lt;details&gt;
  &lt;summary&gt;Phase 7: CI Integration (2026)&lt;/summary&gt;

  &lt;ul&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/6cee4af2&quot;&gt;6cee4af2&lt;/a&gt; use system libs by default (#1077)&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/9b4fafbf&quot;&gt;9b4fafbf&lt;/a&gt; ci: dependency steps use bootstrap script&lt;/li&gt;
  &lt;/ul&gt;

&lt;/details&gt;

&lt;h2 id=&quot;key-design-decisions&quot;&gt;Key Design Decisions&lt;/h2&gt;

&lt;p&gt;Several technical challenges required careful design. Here are the most interesting ones.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Flag propagation.&lt;/strong&gt; Not all flags should reach all dependencies, and the propagation rules vary per flag type and per dependency. Some sanitizers require all dependencies to be instrumented, while others only need compile-time checks. Build type does not always propagate (libxml2 is always built as Release). Compiler paths always propagate. The script evaluates each dependency individually and checks ABI compatibility before deciding whether to honor or coerce the build type.&lt;/p&gt;
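
&lt;p&gt;A minimal sketch of the idea (names and rules are simplified; only the libxml2 and compiler-path rules come from the description above, the rest is illustrative):&lt;/p&gt;

```python
# Sanitizers that require instrumented dependency builds (UBSan does not,
# since it relies mostly on compile-time checks).
VIRAL_SANITIZERS = {"ASan": "address", "MSan": "memory", "TSan": "thread"}

def flags_for_dependency(dep, cfg):
    """Compute per-dependency CMake cache variables (illustrative sketch)."""
    flags = {"CMAKE_CXX_COMPILER": cfg["cxx"]}  # compiler paths always propagate
    # Build type does not always propagate: libxml2 is always built as Release.
    flags["CMAKE_BUILD_TYPE"] = "Release" if dep == "libxml2" else cfg["build_type"]
    sanitizer = cfg.get("sanitizer")
    if sanitizer in VIRAL_SANITIZERS:
        flags["CMAKE_CXX_FLAGS"] = "-fsanitize=" + VIRAL_SANITIZERS[sanitizer]
    return flags
```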

&lt;p&gt;&lt;strong&gt;Windows ABI handling.&lt;/strong&gt; On MSVC, Debug and Release are &lt;a href=&quot;https://learn.microsoft.com/en-us/cpp/porting/binary-compat-2015-2017?view=msvc-170&quot;&gt;ABI-incompatible&lt;/a&gt; at the CRT level. When the script detects a mismatch, it coerces the dependency build to &lt;strong&gt;“OptimizedDebug”&lt;/strong&gt; (Debug ABI with &lt;code&gt;/O2&lt;/code&gt; optimization). This is different from &lt;code&gt;RelWithDebInfo&lt;/code&gt;, which uses the Release ABI with debug symbols and will not link with a Debug MrDocs.&lt;/p&gt;
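
&lt;p&gt;Roughly, and with MSVC flag values recalled from the compiler documentation rather than taken from the script itself, the two configurations differ like this:&lt;/p&gt;

```shell
# "OptimizedDebug": Debug CRT (/MDd) plus optimization -- links with a Debug MrDocs
cmake -S dep -B dep/build -DCMAKE_BUILD_TYPE=Debug \
  "-DCMAKE_CXX_FLAGS_DEBUG=/MDd /O2 /Zi"

# RelWithDebInfo: Release CRT (/MD) plus debug symbols -- does NOT link
# with a Debug (/MDd) build of MrDocs
cmake -S dep -B dep/build -DCMAKE_BUILD_TYPE=RelWithDebInfo
```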

&lt;p&gt;&lt;strong&gt;Cross-platform compiler detection.&lt;/strong&gt; On Linux, compiler detection is straightforward. On macOS with Homebrew Clang, the script detects and injects the correct &lt;code&gt;llvm-ar&lt;/code&gt;, &lt;code&gt;llvm-ranlib&lt;/code&gt;, &lt;code&gt;ld.lld&lt;/code&gt;, and &lt;code&gt;libc++&lt;/code&gt; paths, which are not on the default search path. On Windows, the script locates Visual Studio via &lt;a href=&quot;https://github.com/microsoft/vswhere&quot;&gt;&lt;code&gt;vswhere.exe&lt;/code&gt;&lt;/a&gt;, runs &lt;a href=&quot;https://learn.microsoft.com/en-us/cpp/build/building-on-the-command-line?view=msvc-170&quot;&gt;&lt;code&gt;vcvarsall.bat&lt;/code&gt;&lt;/a&gt; with debug output, and parses the environment variables into Python for all subsequent CMake calls.&lt;/p&gt;
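
&lt;p&gt;The capture-and-parse step on Windows is a standard technique: run &lt;code&gt;vcvarsall.bat&lt;/code&gt; and then &lt;code&gt;set&lt;/code&gt; in the same &lt;code&gt;cmd.exe&lt;/code&gt; session, capture stdout, and parse it. A sketch of the parsing half (helper name hypothetical):&lt;/p&gt;

```python
def parse_set_output(text):
    """Parse the output of the Windows 'set' command into an environment dict.

    Hypothetical helper: the captured variables (PATH, INCLUDE, LIB, ...)
    are then passed to every subsequent CMake invocation.
    """
    env = {}
    for line in text.splitlines():
        key, sep, value = line.partition("=")
        if sep and key:
            env[key] = value
    return env
```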

&lt;p&gt;&lt;strong&gt;CMake preset generation.&lt;/strong&gt; After building dependencies, the script generates a &lt;a href=&quot;https://cmake.org/cmake/help/latest/manual/cmake-presets.7.html&quot;&gt;&lt;code&gt;CMakeUserPresets.json&lt;/code&gt;&lt;/a&gt; with all dependency paths, compiler configuration, and platform conditions. Paths are made relocatable by replacing absolute prefixes with CMake variables (&lt;code&gt;${sourceDir}&lt;/code&gt;, &lt;code&gt;${sourceParentDir}&lt;/code&gt;, &lt;code&gt;$env{HOME}&lt;/code&gt;).&lt;/p&gt;
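
&lt;p&gt;The shape of the generated file is roughly the following (an illustrative fragment; the preset name and install paths are made up, but &lt;code&gt;${sourceDir}&lt;/code&gt; and &lt;code&gt;${sourceParentDir}&lt;/code&gt; are standard CMake preset macros):&lt;/p&gt;

```json
{
  "version": 6,
  "configurePresets": [
    {
      "name": "debug",
      "generator": "Ninja",
      "binaryDir": "${sourceDir}/build/debug",
      "cacheVariables": {
        "CMAKE_BUILD_TYPE": "Debug",
        "LLVM_ROOT": "${sourceParentDir}/third-party/llvm-install",
        "duktape_ROOT": "${sourceParentDir}/third-party/duktape-install"
      }
    }
  ]
}
```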

&lt;p&gt;&lt;strong&gt;IDE run configurations.&lt;/strong&gt; The script generates ready-to-use configurations for CLion, VS Code, and Visual Studio: building and debugging MrDocs, running tests, generating documentation, refreshing dependencies, generating config info and YAML schemas, validating XML output, running MrDocs on Boost libraries (auto-discovered), and reformatting source files. CMake custom commands can expose some of these tasks as build targets, but targets created that way cannot be run under a debugger from the IDE, which is why the script emits native run configurations instead.&lt;/p&gt;
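
&lt;p&gt;For VS Code, for example, the generated artifact is the familiar &lt;code&gt;launch.json&lt;/code&gt;. A fragment of the kind of entry the script might write (field values are illustrative, not copied from the generator):&lt;/p&gt;

```json
{
  "version": "0.2.0",
  "configurations": [
    {
      "name": "mrdocs (debug)",
      "type": "lldb",
      "request": "launch",
      "program": "${workspaceFolder}/build/debug/mrdocs",
      "args": ["mrdocs.yml"],
      "cwd": "${workspaceFolder}"
    }
  ]
}
```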

&lt;p&gt;&lt;strong&gt;Recipe system.&lt;/strong&gt; Dependencies are defined as JSON recipe files with source URLs, build steps, and dependency relationships. The bootstrap topologically sorts them and builds them in order. Each recipe tracks its state with a &lt;strong&gt;stamp file&lt;/strong&gt; (recipe version, git ref, platform, build parameters). If any parameter changes, the dependency is rebuilt. The stamp system also generates &lt;strong&gt;CI cache keys&lt;/strong&gt; like &lt;code&gt;llvm-abc1234-release-ubuntu-24.04-clang-19-ASan&lt;/code&gt;.&lt;/p&gt;
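
&lt;p&gt;Composing the cache key from the stamp parameters is then straightforward. A sketch that reproduces the shape of the example key above (function name hypothetical):&lt;/p&gt;

```python
def cache_key(recipe, git_ref, build_type, platform, compiler, sanitizer=None):
    """Build a CI cache key from the stamp parameters (illustrative sketch).

    Any change to a parameter changes the key, so CI caches are invalidated
    exactly when the stamp system would rebuild the dependency.
    """
    parts = [recipe, git_ref, build_type.lower(), platform, compiler]
    if sanitizer:
        parts.append(sanitizer)
    return "-".join(parts)
```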

&lt;p&gt;&lt;strong&gt;Refresh command.&lt;/strong&gt; Because of the stamp system, a developer can run the bootstrap with &lt;code&gt;--refresh-all&lt;/code&gt; at any time. The script re-evaluates all stamps and rebuilds only the dependencies that are out of date with whatever configurations are needed. This makes updating dependencies after a configuration change (new sanitizer, different compiler, updated LLVM commit) a single command rather than a manual process of figuring out which dependencies need rebuilding.&lt;/p&gt;
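
&lt;p&gt;In practice, updating after a configuration change is a single command:&lt;/p&gt;

```shell
# Re-evaluate every stamp and rebuild only the dependencies that are
# out of date; up-to-date dependencies are skipped entirely.
python bootstrap.py --refresh-all
```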

&lt;h2 id=&quot;what-we-learned&quot;&gt;What We Learned&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;One tool for every audience.&lt;/strong&gt; Users, developers, and CI all use the same tool. Users get a one-liner installation. Developers get IDE run configurations and debugger integration. CI gets non-interactive mode with sanitizer support. The exact same code path that builds dependencies on a developer’s laptop now builds dependencies in CI.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Tailored beats general-purpose.&lt;/strong&gt; When your project’s requirements are complex enough (multiple build types, sanitizer variants, cross-platform quirks, heavy dependencies like LLVM), a custom script that owns the entire dependency lifecycle is simpler than trying to make a general-purpose tool handle every edge case.&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;Existing tools solve the general case well. Our specific combination of requirements needed something tailored.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;C++ has no unified build workflow.&lt;/strong&gt; Every platform has its own conventions for finding compilers, setting up environments, and linking libraries. Just finding and setting up MSVC from a script is a project in itself.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;New contributors can start working immediately.&lt;/strong&gt; Before the bootstrap, getting a working build could take days. Now it takes a single command, and the IDE configurations are included.&lt;/p&gt;

&lt;p&gt;We still have small glitches as new compilers and platforms appear, but each fix is a localized change in one module rather than a cross-cutting update to five independent workflows.&lt;/p&gt;

&lt;p&gt;The complete bootstrap package is available in the &lt;a href=&quot;https://github.com/cppalliance/mrdocs/tree/develop/util/bootstrap&quot;&gt;MrDocs repository&lt;/a&gt;.&lt;/p&gt;</content><author><name></name></author><category term="alan" /><summary type="html">When new developers joined the MrDocs team, we expected the usual ramp-up: learning the codebase, understanding the architecture, and getting comfortable with the review process. What we did not expect was that building and testing the project would be the hardest part. People dedicated to the project full-time spent weeks just trying to get a working build. Even when they succeeded, each person ended up with their own set of workarounds: a custom script here, a patched flag there, an undocumented environment variable somewhere else. One unrelated commit from someone else could silently break another developer’s local setup. And even after all of that, they didn’t know how to run the commands to test the project. As the complexity grew, we naturally reached for a package manager. We adopted vcpkg, but over time we discovered that our problem was too complex for what any package manager is designed to handle. The build type combinations, the sanitizer propagation, the cross-platform toolchain differences, and the IDE configurations: these are workflow problems that kept accumulating. That realization, combined with an onboarding crisis where new contributors could not build the project at all, led us to write our own bootstrap script. The idea was not unfamiliar: at the C++ Alliance, we work closely with the Boost libraries, and Boost has shipped a bootstrap script for years. We knew the pattern worked. We just needed to apply it to our own dependency problem. This post explains why robust C++ workflows are fundamentally difficult, not only for dependency management but also for supporting multiple platforms, compilers, and testing configurations. 
It describes what we learned from our experience with vcpkg and how a bootstrap script solved the problem for MrDocs. Why Dependency Management Is Hard A Combinatorial Explosion Why C++ Makes It Worse What Went Wrong for MrDocs Where vcpkg Fell Short The Problems No Package Manager Solves Five Workflows and Counting The Bootstrap Script How It Evolved Key Design Decisions What We Learned Why Dependency Management Is Hard A Combinatorial Explosion Suppose your project depends on Package A &amp;gt;=1.0 and Package B &amp;gt;=2.0, but all options where A &amp;gt;=1.0 require B &amp;lt;=1.5. You are stuck. With hundreds of packages, each with multiple versions and possibly conflicting or conditional dependencies, the problem explodes combinatorially. This is not hyperbole. Package dependency resolution is NP-complete. It reduces directly to the Boolean satisfiability problem (SAT). Each package version is a boolean variable, each dependency constraint is a clause, and finding an installable set is equivalent to finding a satisfying assignment. Real-world tools handle this with heuristics (like APT and pip) or outright SAT solvers (like libsolv, used by DNF and Zypper). In the worst case, finding a consistent set of dependency versions requires exponential time. Verifying one is polynomial, but discovering it may not be. How other ecosystems hide this complexity Most users never notice this because package managers use tricks. When npm cannot satisfy all constraints, it installs multiple versions of the same package in nested node_modules directories, so both versions get bundled into the final application. Cargo does something similar: when two crates require SemVer-incompatible versions of the same dependency, it includes both in the build. 
Most users are not aware this is happening, and would probably not be happy about it if they were: bundling two versions of the same library increases binary size, can cause subtle bugs when types from different versions interact, and makes the dependency graph harder to reason about. In C++, the trick is not even available. You cannot link two versions of the same library into a single binary. When the constraints are unsatisfiable, there is no quiet fallback. You get a build error. Why C++ Makes It Worse No standard package format: unlike npm, pip, or Cargo, C++ has no universal package format. Every dependency must be compiled with compatible settings, and pre-built binaries are the exception rather than the rule. ABI compatibility: different compilers, compiler versions, and even compiler flags can produce incompatible binaries. You cannot just link any two object files together. API compatibility: header-only and compiled libraries have different concerns. Template instantiation happens at the consumer’s compile time, so a header-only library can break when the consumer’s compiler or standard library version changes, even if the library itself has not changed. Categorical options: choices like shared/static linking, exceptions on/off, and RTTI on/off need to be consistent across the entire dependency chain. If one library is built with exceptions disabled and another expects them, you get subtle runtime failures or linker errors. Viral flags: some flags must propagate to all dependencies, some must not, and some are optional per dependency. Build type is a good example of the nuance. You might want MrDocs in Debug, but building LLVM in Debug makes it too slow to use. Building LLVM in Release makes it too hard to debug when the bug is in LLVM. So you end up with combinations like “MrDocs in Debug, LLVM in Debug with optimization, everything else in Release.” The propagation decision can vary per dependency and per situation. 
Viral macros: preprocessor macros can create different binary versions of the same library. If spdlog depends on fmt with a certain macro configuration, every other dependency on fmt in the hierarchy has to use the same macro. A macro used in fmt might affect the macros available in peer dependencies that define fmt formatters. This creates constraints that propagate transitively through the dependency graph, and no package manager tracks macro configurations. Sanitizer propagation: sanitizers deserve their own mention because they are not all equally viral. UndefinedBehaviorSanitizer is the lightest: it relies on compile-time checks and can share the same dependency builds as the unsanitized configuration. AddressSanitizer, MemorySanitizer, and ThreadSanitizer each need their own separately built LLVM with instrumented dependencies. ASan and MSan go further: they also require an instrumented libc++ built as an LLVM runtime. MSan is the extreme case. It reports false positives on any uninstrumented code, so the entire chain has to be instrumented: first build the C++ standard library with MSan, then build all dependencies against that instrumented standard library, then build MrDocs itself. That is three layers of builds with a single flag threading through all of them. No package manager models these propagation levels. Build type incompatibility: on MSVC, Debug and Release are ABI-incompatible at the CRT level. This means you cannot just build your dependencies in Release and your project in Debug for a faster development cycle. You need all of them on the same side of the Debug/Release boundary. A Debug build with optimization (“OptimizedDebug”) is structurally different from a Release build with debug symbols (“RelWithDebInfo”). The first uses the Debug CRT with /O2; the second uses the Release CRT with debug info. Mixing them causes linker errors. This forces you into configurations that no standard build type represents. 
Platform-specific toolchain setup: each platform has its own way of locating and configuring compilers. On Linux, GCC and Clang are on PATH. On macOS, Homebrew Clang installs toolchain components (llvm-ar, llvm-ranlib, ld.lld) and its standard library (libc++) in non-standard locations that differ from AppleClang’s. The headers and libraries are not on the default search path, so you have to pass their locations explicitly through compiler and linker flags for everything you compile. On Windows, MSVC does not live on PATH at all: it requires environment variables set by vcvarsall.bat, and locating the correct Visual Studio installation requires vswhere.exe. None of this is handled by package managers. Compiler and standard library combinations: on Linux, Clang uses whatever libstdc++ is installed on the system rather than shipping its own. Ubuntu 24.04 ships GCC 13, but MrDocs needs GCC 14 features (like &amp;lt;print&amp;gt;). So a developer using Clang 20 on a fresh Ubuntu machine gets build errors from the standard library, not from their own code. Testing every Clang version with every GCC’s libstdc++ is infeasible, but specific combinations matter, and the mismatch is not obvious to the developer when it happens. Platform explosion: Windows/Linux/macOS multiplied by Debug/Release/OptimizedDebug, GCC/Clang/MSVC/AppleClang, shared/static, and sanitizer variants creates a combinatorial explosion of configurations that all need to be tested. Each platform also has its own quirks: git symlinks behave differently on Windows, Ninja availability varies, and even the way you specify compiler flags differs between MSVC and GCC/Clang. Conditional dependencies: in C++, build options frequently add or remove entire dependencies. An image processing library might support PNG, JPEG, and WebP, each requiring its own codec library. Enabling or disabling a format changes the dependency graph. 
Build scripts also commonly look for host dependencies (system libraries for talking to the OS, GPU, or network) that you are not expected to build yourself but that must be present on the machine. The dependency graph is not static; it depends on the configuration. Closed-source dependencies: all of the problems above assume you have the source code and can rebuild with the correct flags. Sometimes you do not. When a dependency is distributed only as a pre-built binary, there is no way to adjust the ABI, propagate sanitizer flags, or change the build type. If it was compiled with incompatible settings, there is nothing you can do about it. It becomes a hard constraint on the entire system. %%{init: {&quot;theme&quot;: &quot;base&quot;, &quot;themeVariables&quot;: {&quot;primaryColor&quot;: &quot;#f7f9ff&quot;, &quot;primaryBorderColor&quot;: &quot;#9aa7e8&quot;, &quot;primaryTextColor&quot;: &quot;#1f2a44&quot;, &quot;lineColor&quot;: &quot;#b4bef2&quot;, &quot;secondaryColor&quot;: &quot;#fbf8ff&quot;, &quot;tertiaryColor&quot;: &quot;#ffffff&quot;, &quot;fontSize&quot;: &quot;14px&quot;}}}%% mindmap root((C++ Dependencies)) No Standard Format Built from source Closed-source binaries Compatibility ABI API / Templates Build Type / CRT Propagation Viral flags Viral macros Sanitizers Categorical options Dependencies Conditional on build options Host / system libraries Closed-source binaries Platform Toolchain setup Compiler + stdlib combos Combinatorial explosion In C++, the general case involves so many dimensions that no existing tool handles all of them well. What about CPS? The Common Package Specification (CPS) is an interesting effort to standardize how C++ packages are consumed. A .cps file describes everything a build system needs to find and link against an already-built package: include paths, library paths, compiler flags. 
This is valuable, but it operates at the point of consumption, where we have already made all the decisions about platform, compiler, build type, and sanitizers. It assumes the dependency has already been built in a compatible way. It does not describe how to build the dependency with the correct flags in the first place. For example, if we need AddressSanitizer, all dependencies must be built with ASan instrumentation. A CPS file tells us how to consume a package that was built with ASan, but it does not know how to rebuild that package with ASan if it was not. The problems described above are all about making those upstream decisions correctly, which happens before CPS enters the picture. What Went Wrong for MrDocs MrDocs depends on LLVM, Duktape, Lua, and libxml2 (and previously also fmt). Over time, three categories of problems accumulated. Where vcpkg Fell Short For over a year, we used vcpkg to manage these dependencies. MrDocs is a tool, not a library, so we only needed vcpkg for acquiring our own dependencies rather than for making ourselves easy to consume downstream. It worked at first, but the complexity of our workflows gradually outgrew what vcpkg was designed to handle: Build types: MrDocs developers frequently need a Debug build with optimization enabled because the codebase is large enough that an unoptimized debug build is painfully slow. On MSVC, Debug and Release are ABI-incompatible, so a “Debug with optimization” configuration does not fit neatly into vcpkg’s Debug/Release binary model. Patches and dual paths: vcpkg applies patches to libraries that do not follow CMake conventions. This meant we had to support two ways to find the same library: the vcpkg-patched version and the upstream version. When libraries do follow CMake conventions, we do not need vcpkg as much. But when they do not, the patches make vcpkg less useful rather than more. Contributors kept opening PRs proposing yet another way to locate a dependency. 
In a build script, every new path is expensive to test. Rigid baseline: vcpkg’s baseline model pins all libraries to a single snapshot. We are tightly coupled to a specific LLVM commit, so we could not use vcpkg for LLVM from the start. That alone meant vcpkg could only manage a subset of our dependencies. On top of that, when fmt bumped a major version and broke downstream consumers, it showed that the baseline approach is too rigid for projects that use a few unrelated libraries. Sometimes the entire baseline would be updated and libraries we had no reason to touch just got upgraded, introducing unexpected breakage. Different developers also had different baseline expectations, so the same vcpkg.json could produce different results depending on when someone last updated. Missing dependencies: some dependencies were not in vcpkg at all, or not configured the way we needed them. LLVM is the classic example: we need a specific commit, built with specific flags. Tools do not provide their own vcpkg integration; everything is centralized in the vcpkg repository. This forced us into mixed-source dependency management where some deps come from vcpkg and some from custom scripts. No variant support: when we needed sanitizer builds (ASan, MSan, UBSan, TSan), vcpkg had nothing to offer. It knows Debug and Release. Building sanitized variants required custom scripts or custom environment variables to pass the information to the package manager internally. Manifest vs. classic mode: vcpkg offers two modes for specifying dependencies. Some users simply did not like one of the modes, and we had so many complaints that we ended up supporting both. Unlike npm’s local and global modes, vcpkg’s manifest and classic modes do not play well together, so supporting both effectively meant maintaining two separate dependency workflows. The vcpkg team has done outstanding work on a genuinely difficult problem, and vcpkg handles a lot of it well. 
Many of these limitations may simply be the best anyone can do given the complexity of the language. Most of the problems listed above do have external solutions: you can set custom triplets, configure environment variables, pass flags manually, and configure build types from outside vcpkg. That is how we handled it for a long time. The issue is that those solutions live outside the vcpkg workflow. We owned that part, and maintaining it was hard. Having vcpkg in the equation meant one more workflow to support, even when the problem was not vcpkg’s fault. The accumulated complexity of maintaining vcpkg alongside our own custom scripts is what eventually became unsustainable. The Problems No Package Manager Solves Dependency acquisition at configure time: we once had FetchContent as an optional alternative to find_package, so CMake could download dependencies if they were not already present. A team member’s internet went down during a build and CMake failed. The reaction was strong: nobody should be required to have internet to compile a project they already downloaded. The feature was removed entirely. This reinforced that dependency acquisition needed to be a separate, explicit step that completes before the build system even runs. IDE integration: developers had to manually configure run configurations for CLion, VS Code, or Visual Studio, and those configurations broke whenever the application changed, build options were added, or targets were renamed. Platform-specific toolchain setup: on macOS with Homebrew Clang, the standard tool paths (llvm-ar, llvm-ranlib, ld.lld) are not where the system expects them. On Windows, MSVC requires a Developer Command Prompt with specific environment variables. Setting up either of these correctly from scratch is its own project. Debugger integration: there was no automated way to set up LLDB formatters or GDB pretty printers for Clang and MrDocs symbols. Developers working on the AST had to inspect raw memory layouts. 
The sheer volume of instructions: the build script should not assume a package manager, so you end up documenting both the manual and the package manager path. For each dependency, for each variant (sanitizers, special build types), for each platform. When the package manager path does not work for a given configuration, the developer falls back to the manual path, and that path has to be maintained too. Five Workflows and Counting The proliferation was gradual. We started with manual CMake commands, then added FetchContent as an alternative, then adopted vcpkg, then had to support both vcpkg modes, then needed custom CI scripts. By mid-2025, we had accumulated five different workflows for installing dependencies: Manual CMake: the original path, configuring everything by hand FetchContent: later removed after the internet incident vcpkg (manifest mode): the “official” package manager path vcpkg (classic mode): because some users did not like manifest mode Custom CI scripts: CI uses its own language to describe workflows, and there was no single command that could configure all possible build variants %%{init: {&quot;theme&quot;: &quot;base&quot;, &quot;themeVariables&quot;: {&quot;primaryColor&quot;: &quot;#fce4e4&quot;, &quot;primaryBorderColor&quot;: &quot;#e8a0a0&quot;, &quot;primaryTextColor&quot;: &quot;#1f2a44&quot;, &quot;lineColor&quot;: &quot;#e8a0a0&quot;, &quot;secondaryColor&quot;: &quot;#fef3e4&quot;, &quot;tertiaryColor&quot;: &quot;#ffffff&quot;, &quot;fontSize&quot;: &quot;14px&quot;}}}%% flowchart LR A[New Developer] --&amp;gt; B{Which workflow?} B --&amp;gt; C[Manual CMake] B --&amp;gt; D[FetchContent] B --&amp;gt; E[vcpkg manifest] B --&amp;gt; F[vcpkg classic] B --&amp;gt; G[CI scripts] We tried to create a set of instructions that would describe what the user could do for each dependency. For each dependency, we would explain each of the ways to fetch and build it: manual, vcpkg manifest, vcpkg classic. 
On top of that, for each special variant (sanitizer builds, special build type combinations), there would be yet another set of instructions per dependency per workflow. The documentation grew combinatorially, and people got lost. The Bootstrap Script The core principle was separation of concerns: CMake builds the project, but something else manages the dependencies. The bootstrap script fills that gap. Before: # Clone and build LLVM (specific commit) git clone https://github.com/llvm/llvm-project.git cd llvm-project &amp;amp;&amp;amp; git checkout dc4cef81d47c... cmake -S llvm -B build -DCMAKE_BUILD_TYPE=Release ... cmake --build build cmake --install build cd .. # Download and build Duktape curl -L https://github.com/.../duktape-2.7.0.tar.xz | tar xJ cmake -S duktape -B duktape/build ... cmake --build duktape/build cmake --install duktape/build # Repeat for libxml2, Lua... # Then configure MrDocs with all the install paths cmake -S mrdocs -B mrdocs/build \ -DLLVM_ROOT=/path/to/llvm/install \ -Dduktape_ROOT=/path/to/duktape/install \ -Dlibxml2_ROOT=/path/to/libxml2/install \ ... 
cmake --build mrdocs/build After: python bootstrap.py The script handles everything else: Probes MSVC (Windows only): detects and imports the Visual Studio development environment Checks system prerequisites: validates that cmake, git, python, and a C/C++ compiler are available Sets up compilers: resolves compiler paths, detects Homebrew Clang on macOS Configures build options: prompts for build type, sanitizer, and preset name (or accepts defaults in non-interactive mode for CI) Probes compilers: runs a dummy CMake project to extract the compiler ID, version, and capabilities before building anything Sets up Ninja: finds or downloads the Ninja build system Installs dependencies: fetches and builds Duktape, Lua, libxml2, and LLVM in topological order, each with the correct flags for the chosen configuration Generates CMake presets: writes a CMakeUserPresets.json with all dependency paths, compiler configuration, and IDE settings Generates IDE configurations: run/debug configs for CLion, VS Code, and Visual Studio, plus debugger pretty printers Builds MrDocs: configures, builds, and optionally installs MrDocs using the generated presets Runs tests: executes the test suite in parallel %%{init: {&quot;theme&quot;: &quot;base&quot;, &quot;themeVariables&quot;: {&quot;primaryColor&quot;: &quot;#e4eee8&quot;, &quot;primaryBorderColor&quot;: &quot;#affbd6&quot;, &quot;primaryTextColor&quot;: &quot;#000000&quot;, &quot;lineColor&quot;: &quot;#baf9d9&quot;, &quot;secondaryColor&quot;: &quot;#f0eae4&quot;, &quot;tertiaryColor&quot;: &quot;#ebeaf4&quot;, &quot;fontSize&quot;: &quot;14px&quot;}}}%% sequenceDiagram participant U as Developer participant B as bootstrap.py participant S as System participant D as Dependencies participant C as CMake participant I as IDE U-&amp;gt;&amp;gt;B: python bootstrap.py B-&amp;gt;&amp;gt;S: Probe MSVC environment (Windows) B-&amp;gt;&amp;gt;S: Check prerequisites (cmake, git, compiler) B-&amp;gt;&amp;gt;S: Set up compilers and Ninja 
B-&amp;gt;&amp;gt;U: Prompt for build type, sanitizer, preset B-&amp;gt;&amp;gt;S: Probe compiler ID and version B-&amp;gt;&amp;gt;D: Fetch and build dependencies B-&amp;gt;&amp;gt;C: Generate CMakeUserPresets.json B-&amp;gt;&amp;gt;I: Generate IDE and debugger configs B-&amp;gt;&amp;gt;C: Build and install MrDocs B-&amp;gt;&amp;gt;C: Run tests How It Evolved The first commit landed on July 16, 2025. Over the next eight months, the script went through seven distinct phases of development across roughly 57 commits. %%{init: {&quot;theme&quot;: &quot;base&quot;, &quot;themeVariables&quot;: {&quot;primaryColor&quot;: &quot;#f7f9ff&quot;, &quot;primaryBorderColor&quot;: &quot;#9aa7e8&quot;, &quot;primaryTextColor&quot;: &quot;#1f2a44&quot;, &quot;lineColor&quot;: &quot;#b4bef2&quot;, &quot;secondaryColor&quot;: &quot;#fbf8ff&quot;, &quot;tertiaryColor&quot;: &quot;#ffffff&quot;, &quot;fontSize&quot;: &quot;14px&quot;}}}%% timeline title bootstrap.py Evolution Jul 2025 : Foundation and UX Aug 2025 : IDE configs, sanitizers, and Windows Sep 2025 : Developer tooling and LLDB Dec 2025 : Modularization into package Mar 2026 : CI integration The first week (July 16–19) was about getting the one-liner to work at all: the core workflow, colored prompts, parallel test execution, and the first installation docs. Phase 1: Foundation (July 16–19, 2025) 521cc704 build: bootstrap script e32bb36e build: bootstrap uses another path for mrdocs source when not already called from source directory e7e3ef51 build: bootstrap build options list valid types 75c28e45 build: bootstrap prompts use colors c156a05f build: bootstrap removes redundant flags c14f071b build: bootstrap runs tests in parallel 1a9de28c docs: one-liner installation instructions 76611f93 build: bootstrap paths use cmake relative path shortcuts The second and third weeks turned the script into a development environment setup tool by generating IDE run configurations for CLion, VS Code, and Visual Studio. 
By the end of July, the script also supported custom compilers, sanitizer builds, and Homebrew Clang on macOS. Phase 2: IDE Integration (July 22–28, 2025) 502cfbd8 build: bootstrap generates debug configurations b546c260 build: bootstrap dependency refresh run configurations 83525d38 build: bootstrap documentation run configurations 2cfdd19e build: bootstrap website run configurations ca4b04d3 build: bootstrap MrDocs self-reference run configuration b5f53bd9 build: bootstrap XML lint run configurations Phase 3: Build Variants and Sanitizers (July 29–August 1, 2025) 0a751acd build: bootstrap supports custom compilers ff62919f build: LLVM runtimes come from presets 2b757fac build: bootstrap debug presets with release dependencies 0d179e84 build: installation workflow uses Ninja for all projects 3d8fa853 build: installation workflow supports sanitizers 26cec9d8 build: installation workflow supports homebrew clang August was the cross-platform month. Windows support required probing vcvarsall.bat, handling Visual Studio tool paths, and ensuring git symlinks worked. Paths were made relocatable so CMakeUserPresets.json files could be shared across machines. Phase 4: Cross-Platform Polish (August 2025) fc2aa2d6 build: external include directories are relocatable 21c206b9 build: bootstrap vscode run configurations d2f9c204 build: Visual Studio run configurations 0ca523e7 build: bootstrap supports default Visual Studio tool paths on Windows 4b79ef41 build(bootstrap): probe vcvarsall environment 4d705c96 build(bootstrap): ensure git symlinks 524e7923 build(bootstrap): visual studio run configurations and tasks 94a5b799 build(bootstrap): remove dependency build directories after installation September and October added developer tooling: LLDB data formatters for Clang and MrDocs symbols, pretty printer configurations, libcxx hardening mode, and the style guide documentation. 
Phase 5: Developer Tooling (September–October 2025) fc98559a build(bootstrap): include pretty printers configuration 069bd8f4 feat(lldb): LLDB data formatters 1b39fdd7 fix(lldb): clang ast formatters 988e9ebc build(bootstrap): config info for docs f48bbd2f build: bootstrap enables libcxx hardening mode 5e16e3fa Fix support for clang cl-mode driver (#1069) By December, the monolithic 2,700-line bootstrap.py was refactored into a proper Python package under util/bootstrap/ with 20+ modules organized by concern: core/ (platform detection, options, UI), configs/ (IDE run configurations), presets/ (CMake preset generation), recipes/ (dependency building), and tools/ (compiler detection). The package also includes its own test suite, which means one person changing the bootstrap script for their platform is not going to break it for someone else on a different platform. Phase 6: Modularization (November–December 2025) 0d4a8459 build(bootstrap): modularize recipes 7ba4699b build(bootstrap): transition banner 99d61207 build(bootstrap): handle empty input and “none” in prompt retry e3b3fd02 build(bootstrap): convert script into package structure In March 2026, the bootstrap script replaced the custom CI dependency scripts. This was a major milestone: users, developers, and CI now all use the same tool. CI was simplified significantly because the dependency steps are no longer custom shell commands maintained separately. And because CI runs the bootstrap on every push, the script itself is continuously tested across all platforms. If the bootstrap breaks on any platform, CI catches it immediately. Phase 7: CI Integration (2026) 6cee4af2 use system libs by default (#1077) 9b4fafbf ci: dependency steps use bootstrap script Key Design Decisions Several technical challenges required careful design. Here are the most interesting ones. Flag propagation. Not all flags should reach all dependencies, and the propagation rules vary per flag type and per dependency. 
Some sanitizers require all dependencies to be instrumented, while others only need compile-time checks. Build type does not always propagate (libxml2 is always built as Release). Compiler paths always propagate. The script evaluates each dependency individually and checks ABI compatibility before deciding whether to honor or coerce the build type. Windows ABI handling. On MSVC, Debug and Release are ABI-incompatible at the CRT level. When the script detects a mismatch, it coerces the dependency build to “OptimizedDebug” (Debug ABI with /O2 optimization). This is different from RelWithDebInfo, which uses the Release ABI with debug symbols and will not link with a Debug MrDocs. Cross-platform compiler detection. On Linux, compiler detection is straightforward. On macOS with Homebrew Clang, the script detects and injects the correct llvm-ar, llvm-ranlib, ld.lld, and libc++ paths, which are not on the default search path. On Windows, the script locates Visual Studio via vswhere.exe, runs vcvarsall.bat with debug output, and parses the environment variables into Python for all subsequent CMake calls. CMake preset generation. After building dependencies, the script generates a CMakeUserPresets.json with all dependency paths, compiler configuration, and platform conditions. Paths are made relocatable by replacing absolute prefixes with CMake variables (${sourceDir}, ${sourceParentDir}, $env{HOME}). IDE run configurations. The script generates ready-to-use configurations for CLion, VS Code, and Visual Studio: building and debugging MrDocs, running tests, generating documentation, refreshing dependencies, generating config info and YAML schemas, validating XML output, running MrDocs on Boost libraries (auto-discovered), and reformatting source files. CMake custom commands can create build targets, but you cannot debug them from the IDE. Recipe system. Dependencies are defined as JSON recipe files with source URLs, build steps, and dependency relationships. 
The bootstrap topologically sorts them and builds them in order. Each recipe tracks its state with a stamp file (recipe version, git ref, platform, build parameters). If any parameter changes, the dependency is rebuilt. The stamp system also generates CI cache keys like llvm-abc1234-release-ubuntu-24.04-clang-19-ASan. Refresh command. Because of the stamp system, a developer can run the bootstrap with --refresh-all at any time. The script re-evaluates all stamps and rebuilds only the dependencies that are out of date with whatever configurations are needed. This makes updating dependencies after a configuration change (new sanitizer, different compiler, updated LLVM commit) a single command rather than a manual process of figuring out which dependencies need rebuilding. What We Learned Users, developers, and CI now all use the same tool. Users get a one-liner installation. Developers get IDE run configurations and debugger integration. CI gets non-interactive mode with sanitizer support. The exact same code path that builds dependencies on a developer’s laptop now builds dependencies in CI. Separation of concerns. When your project’s requirements are complex enough (multiple build types, sanitizer variants, cross-platform quirks, heavy dependencies like LLVM), a custom script that owns the entire dependency lifecycle is simpler than trying to make a general-purpose tool handle every edge case. Existing tools solve the general case well. Our specific combination of requirements needed something tailored. C++ has no unified build workflow. Every platform has its own conventions for finding compilers, setting up environments, and linking libraries. Just finding and setting up MSVC from a script is a project in itself. New contributors can start working immediately. Before the bootstrap, getting a working build could take days. Now it takes a single command, and the IDE configurations are included. 
We still have small glitches as new compilers and platforms appear, but each fix is a localized change in one module rather than a cross-cutting update to five independent workflows. The complete bootstrap package is available in the MrDocs repository.</summary></entry><entry><title type="html">Joining Community, Detecting Communities, Making Community.</title><link href="http://cppalliance.org/arnaud/2026/04/08/Arnaud2026Q1Update.html" rel="alternate" type="text/html" title="Joining Community, Detecting Communities, Making Community." /><published>2026-04-08T00:00:00+00:00</published><updated>2026-04-08T00:00:00+00:00</updated><id>http://cppalliance.org/arnaud/2026/04/08/Arnaud2026Q1Update</id><content type="html" xml:base="http://cppalliance.org/arnaud/2026/04/08/Arnaud2026Q1Update.html">&lt;h2 id=&quot;joining-community&quot;&gt;Joining Community&lt;/h2&gt;

&lt;p&gt;Early in Q1 2026, I joined the C++ Alliance. A very exciting moment.&lt;/p&gt;

&lt;p&gt;So I began to work early January under Joaquin’s mentorship, with the idea of having a clear contribution to Boost Graph by the end of Q1. 
After a few days of auditing the current state of the library versus the literature, it became clear that community detection methods 
(aka graph clustering algorithms) were sorely lacking for Boost.Graph, and that implementing one would be a great start 
to revitalizing the library and fill up maybe the largest methodological gap in its current algorithmic coverage.&lt;/p&gt;

&lt;h2 id=&quot;detecting-communities&quot;&gt;Detecting Communities&lt;/h2&gt;

&lt;p&gt;The vision was (and still is) simple: i) implement 
the Louvain algorithm, ii) build upon it to extend to the more complex Leiden algorithm, iii) finally get 
started on the Stochastic Block Model.&lt;/p&gt;

&lt;p&gt;While the plan is straightforward, the Louvain literature is not, and the BGL abstractions even less so. 
But with review and guidance from Joaquin and Jeremy Murphy (maintainer of the BGL), I was able to put together a satisfying implementation:&lt;/p&gt;

&lt;p&gt;Using the Newman-Girvan Modularity as the quality function to optimize, one can simply call:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-cpp&quot;&gt;double Q = boost::louvain_clustering(
    g, cluster_map, weight_map, gen,
    boost::newman_and_girvan{},  // quality function (default)
    1e-7,                        // min_improvement_inner (per-pass convergence)
    0.0                          // min_improvement_outer (cross-level convergence)
);
// Q = 0.42, cluster_map = {0,0,0, 1,1,1}
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;As often happens with heuristics, there is a large number of quality functions out there, and this is not merely 
due to a lack of consensus: in &lt;a href=&quot;https://www.cs.cornell.edu/home/kleinber/nips15.pdf&quot;&gt;a 2002 paper&lt;/a&gt;, 
computer scientist Jon Kleinberg proved that no clustering quality function 
(Modularity, Goldberg density, Surprise…) can simultaneously be:&lt;/p&gt;
&lt;ol&gt;
  &lt;li&gt;scale-invariant (multiplying all edge weights or distances by a constant should not change the clusters),&lt;/li&gt;
  &lt;li&gt;rich (all partitions should be achievable),&lt;/li&gt;
  &lt;li&gt;consistent (shrinking distances inside a cluster and expanding distances between clusters should not change the clusters).&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;In other words, no single quality function can exhibit all three basic properties we would naturally expect.
All we can do is explore different trade-offs using different quality functions.&lt;/p&gt;
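&lt;p&gt;Concretely, here is what one such quality function computes: a minimal, BGL-free sketch of Newman-Girvan modularity on a toy graph. The fixed-size arrays, the 64-vertex cap, and the assumption that community labels lie in [0, n) are simplifications of this sketch, not part of the library:&lt;/p&gt;

```cpp
// Newman-Girvan modularity: Q = (1/2m) * sum_ij (A_ij - k_i*k_j/(2m)) * [c_i == c_j].
// Equivalently, per community c: Q = sum_c (l_c/m - (d_c/(2m))^2),
// where l_c = number of edges fully inside c and d_c = total degree of c.
double modularity(int n, int m, const int edges[][2], const int comm[]) {
    double k[64]  = {0};   // vertex degrees (sketch assumes n at most 64)
    double lc[64] = {0};   // edges fully inside each community
    double dc[64] = {0};   // total degree per community
    for (int e = 0; e != m; ++e) {
        int u = edges[e][0], v = edges[e][1];
        k[u] += 1.0;
        k[v] += 1.0;
        if (comm[u] == comm[v]) lc[comm[u]] += 1.0;
    }
    for (int v = 0; v != n; ++v) dc[comm[v]] += k[v];
    double q = 0.0;
    for (int c = 0; c != n; ++c)
        q += lc[c] / m - (dc[c] / (2.0 * m)) * (dc[c] / (2.0 * m));
    return q;
}

// Two triangles joined by a single bridge edge:
const int edges6[7][2] = {{0,1},{0,2},{1,2},{3,4},{3,5},{4,5},{2,3}};
const int comm6[6] = {0,0,0,1,1,1};
// modularity(6, 7, edges6, comm6) is 5/14, roughly 0.357
```

&lt;p&gt;On this graph the natural two-community partition scores Q = 5/14, while lumping everything into one community scores 0; this is the quantity Louvain tries to increase move by move.&lt;/p&gt;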

&lt;p&gt;So I left the door open for injecting an arbitrary quality function. 
If this function exposes only a minimal, “naive” interface, the algorithm statically selects a 
slow but generic path, iterating across all the edges of the graph to compute the quality. 
It is slow, yes, but it makes studying quality functions easier, as one does not have to work out 
the local mathematical decomposition of the function before getting started with the code:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-cpp&quot;&gt;struct my_quality {
    template &amp;lt;typename G, typename CMap, typename WMap&amp;gt;
    typename boost::property_traits&amp;lt;WMap&amp;gt;::value_type
    quality(const G&amp;amp; g, const CMap&amp;amp; c, const WMap&amp;amp; w) {
        // your custom partition quality function
    }
};

double Q = boost::louvain_clustering(g, cluster_map, weight_map, gen, my_quality{});
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;However, the Louvain algorithm owes its popularity to its speed: it incrementally updates the 
quality’s computational state for each vertex it tries to “insert” into, or “remove” from, a neighboring putative community. 
This &lt;em&gt;locality&lt;/em&gt; decomposition has to be worked out mathematically for each quality function, so it is not trivial.&lt;/p&gt;
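&lt;p&gt;For modularity specifically, a standard form of that local update exists: the gain of inserting an isolated, unweighted vertex into a community depends only on the vertex’s degree, its links into the community, and the community’s total degree. The function below is a sketch of the idea, not the PR’s actual interface:&lt;/p&gt;

```cpp
// Modularity gain of inserting an isolated vertex v into community C:
//   dQ = k_in/m - (d_C * k_v) / (2*m*m)
// where m is the total number of edges, k_v the degree of v,
// k_in the number of edges from v into C, and d_C the total degree of C.
// Everything is local to v and C, so the move can be scored in O(deg(v))
// instead of re-scanning the whole graph.
double insert_gain(double m, double k_v, double k_in, double d_C) {
    return k_in / m - (d_C * k_v) / (2.0 * m * m);
}

// Toy check on two triangles joined by a bridge (m = 7 edges):
// moving vertex 2 (degree 3, two links into {0,1}, whose total degree is 4)
// from a singleton community into {0,1} gives dQ = 8/49, roughly 0.163,
// which matches the difference of the two full modularity evaluations.
```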

&lt;p&gt;I defined a &lt;code&gt;GraphPartitionQualityFunctionIncrementalConcept&lt;/code&gt; that refines the &lt;code&gt;GraphPartitionQualityFunctionConcept&lt;/code&gt;: 
if the algorithm detects that the injected quality function exposes an interface for this incremental update, 
the fast path is taken. One thing I realized is that the &lt;code&gt;GraphPartitionQualityFunctionIncrementalConcept&lt;/code&gt; is, for now, too specific 
to the Modularity family. I am currently working on a proposal to broaden its scope in future work.&lt;/p&gt;

&lt;p&gt;The current PR has been carefully tested and benchmarked for correctness and performance, and approved by 
Jeremy for merging into the develop branch.&lt;/p&gt;

&lt;p&gt;I also wrote a paper, to be submitted to the Journal of Open Source Software, presenting the current results and benchmarks: 
we are at least as fast as our competitors, and more generic. I am aware of no equivalent.&lt;/p&gt;

&lt;h2 id=&quot;making-community&quot;&gt;Making Community&lt;/h2&gt;

&lt;p&gt;Concurrently, I worked on rallying the Boost.Graph user base, and it quickly became clear that a small local workshop would 
be a tremendous start: the Louvain algorithm community is based in Louvain (Belgium), its extension was 
formulated in Leiden (Netherlands), and my PhD graph network is based in Paris (France), in what has been presented to me 
as “the Temple of the Stochastic Block Model”! Quite a sign: life finds ways to run in (tight) circles.&lt;/p&gt;

&lt;p&gt;So the goal of this &lt;a href=&quot;https://github.com/boostorg/graph/discussions/466&quot;&gt;workshop&lt;/a&gt; is to bring together a small group 
(10-15 people) of researchers, open-source implementers, and industrial users for 
a day of honest conversation on May 6th, 2026. Three questions will anchor the discussions:&lt;/p&gt;
&lt;ol&gt;
  &lt;li&gt;What types of graphs and data structures do you use in practice?&lt;/li&gt;
  &lt;li&gt;What performance, scalability, and interpretability requirements matter most to you?&lt;/li&gt;
  &lt;li&gt;What algorithms are missing today that Boost.Graph could offer?&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Ray and Collier from the C++ Alliance will also be there to record the lightning talks and document the process. 
It will also be an occasion to show off the Python-based animations I put together for the &lt;a href=&quot;https://www.youtube.com/watch?v=-OVvzRFiYLU&quot;&gt;French C++ User Group 
presentation on March 24th&lt;/a&gt;. 
Those were a nice success and drew many compliments, as animation pairs well with the visual and 
dynamic nature of graphs and their algorithms, and I hope it will contribute 
to the repopularization of Boost.Graph.&lt;/p&gt;

&lt;p&gt;Graphliiings asseeeeemble !&lt;/p&gt;</content><author><name></name></author><category term="arnaud" /><summary type="html">Joining Community Early in Q1 2026, I joined the C++ Alliance. A very exciting moment. So I began to work early January under Joaquin’s mentorship, with the idea of having a clear contribution to Boost Graph by the end of Q1. After a few days of auditing the current state of the library versus the literature, it became clear that community detection methods (aka graph clustering algorithms) were sorely lacking for Boost.Graph, and that implementing one would be a great start to revitalizing the library and fill up maybe the largest methodological gap in its current algorithmic coverage. Detecting Communities The vision was (and still is) simple: i) begin to implement Louvain algorithm, ii) build upon it to extend to the more complex Leiden algorithm, iii) finally get started with the Stochastic Block Model. If the plan is straightforward, the Louvain literature is not, and the BGL abstractions even less. 
But under the review and guidance from Joaquin and Jeremy Murphy (maintainer of the BGL), I was able to put up a satisfying implementation: Using the Newman-Girvan Modularity as the quality function to optimize, one can simply call: double Q = boost::louvain_clustering( g, cluster_map, weight_map, gen, boost::newman_and_girvan{}, // quality function (default) 1e-7, // min_improvement_inner (per-pass convergence) 0.0 // min_improvement_outer (cross-level convergence) ); // Q = 0.42, cluster_map = {0,0,0, 1,1,1} As it happens often with heuristics, there is a large number of quality functions out there, and this is not because of a lack of consensus: in a 2002 paper, computer scientist Jon Kleinberg proved that no clustering quality function (Modularity, Goldberg density, Surprise…) can simultaneously be: scale-invariant (doubling all edges should not change the clusters), rich (all partitions should be achievable), consistent (shortening distances inside a cluster and expanding distances between clusters should lead to similar results). In other words, there is no way to implement a single function hoping it would exhibit three basic properties we would genuinely expect. All we can do is to explore different trade-offs using different quality functions. So I left some doors open to be able to inject an arbitrary quality function. If this function exposes a minimal, “naive” interface, the algorithm will statically use a slow but generic path, and iterate across all the edges of the graph to compute the quality. 
It is slow, yes, but it makes the study of qualities easier, as one does not have to figure out the local mathematical decomposition of the function to get started with coding: struct my_quality { template &amp;lt;typename G, typename CMap, typename WMap&amp;gt; typename boost::property_traits&amp;lt;WMap&amp;gt;::value_type quality(const G&amp;amp; g, const CMap&amp;amp; c, const WMap&amp;amp; w) { // your custom partition quality function } }; double Q = boost::louvain_clustering(g, cluster_map, weight_map, gen, my_quality{}); However, the Louvain algorithm is extremely popular because it is fast, as it is able to update the quality computational state for each vertex it tries to “insert” or “remove” from a neighboring putative community. This locality decomposition has to be figured out mathematically for each quality function, so it’s not trivial. I defined a GraphPartitionQualityFunctionIncrementalConcept that refines the GraphPartitionQualityFunctionConcept : if the algorithm detects that the injected quality function exposes an interface for this incremental update, the fast path is taken. One thing I figured out is that the GraphPartitionQualityFunctionIncrementalConcept is for now too specific to the Modularity family. I am currently working on a proposal to increase its scope in future work. The current PR has been carefully tested and benchmarked for correctness and performance, and validated by Jeremy to be merged on develop branch. I wrote a paper to be submitted to the Journal of Open Source Software to publish the current results and benchmarks, as we are at least as fast as our competitors, and more generic. There is no equivalent I am aware of. 
Making Community Concurrently, I worked on summoning the Boost.Graph user base, and it quickly became clear a small local workshop would be a tremendous start: the Louvain algorithm community is based in Louvain (Belgium), its extension was formulated in Leiden (Netherlands) and my PhD graphs network is based in Paris (France) in what has been presented to me as “the Temple of the Stochastic Block Model” ! Quite a sign: life finds ways to run in (tight) circles. So the goal of this workshop is to bring together a small group (10-15 people) of researchers, open-source implementers, and industrial users for a day of honest conversation on May 6th 2026. Three questions will anchor the discussions: What types of graphs and data structures do you use in practice? What performance, scalability, and interpretability requirements matter most to you? What algorithms are missing today that Boost.Graph could offer? Ray and Collier from the C++ Alliance will also be there to record the lightning talks and document the process. It would also be the occasion to show off the python-based animations I put together for the French C++ User Group presentation on March 24th. Those had a nice success and received many compliments, as it pairs well with the visual and dynamic nature of graphs and their algorithms, and I hope it will contribute to the repopularization of Boost.Graph. 
Graphliiings asseeeeemble !</summary></entry><entry><title type="html">Mr.Docs: Niebloids, Reflection, Code Removal, New XML Generator</title><link href="http://cppalliance.org/gennaro/2026/04/06/Gennaros2026Q1Update.html" rel="alternate" type="text/html" title="Mr.Docs: Niebloids, Reflection, Code Removal, New XML Generator" /><published>2026-04-06T00:00:00+00:00</published><updated>2026-04-06T00:00:00+00:00</updated><id>http://cppalliance.org/gennaro/2026/04/06/Gennaros2026Q1Update</id><content type="html" xml:base="http://cppalliance.org/gennaro/2026/04/06/Gennaros2026Q1Update.html">&lt;p&gt;This quarter, I focused on two areas of Mr.Docs: adding first-class support for
function objects, the pattern behind C++20 Niebloids and Ranges CPOs, and
overhauling how the tool turns C++ metadata into documentation output (the
reflection layer).&lt;/p&gt;

&lt;h2 id=&quot;function-objects-documenting-what-users-actually-call&quot;&gt;Function objects: documenting what users actually call&lt;/h2&gt;

&lt;p&gt;In modern C++ libraries, many “functions” are actually global objects whose type
has &lt;code&gt;operator()&lt;/code&gt; overloads. The Ranges library, for instance, defines
&lt;code&gt;std::ranges::sort&lt;/code&gt; not as a function template but as a variable of some
unspecified callable type. Users call it like a function and expect it to be
documented like one. Before this quarter, Mr.Docs didn’t know the difference: it
would document the variable and its cryptic implementation type.&lt;/p&gt;

&lt;p&gt;The new function-object support (roughly 4,600 lines across 38 files) bridges
this gap. When Mr.Docs encounters a variable whose type is a record with no
public members but &lt;code&gt;operator()&lt;/code&gt; overloads and special member functions, it now
synthesizes free-function documentation entries named after the variable. The
underlying type is marked implementation-defined and hidden from the output.
Multi-overload function objects are naturally grouped by the existing overload
machinery. So, given:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-cpp&quot;&gt;struct abs_fn {
    double operator()(double x) const noexcept;
};
inline constexpr abs_fn abs = {};
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Mr.Docs documents it as simply:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;double abs(double x) noexcept;
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;For cases where auto-detection isn’t quite right — for example, when the type
has extra public members — library authors can use the new &lt;code&gt;@functionobject&lt;/code&gt; or
&lt;code&gt;@functor&lt;/code&gt; doc commands. There is also an &lt;code&gt;auto-function-objects&lt;/code&gt; config option
to control the behavior globally. The feature comes with a comprehensive test
fixture covering single and multi-overload function objects, templated types,
and types that live in nested &lt;code&gt;detail&lt;/code&gt; namespaces.&lt;/p&gt;
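&lt;p&gt;To illustrate when the explicit command is needed (the type below is hypothetical; only the &lt;code&gt;@functionobject&lt;/code&gt; command itself comes from the feature described above), a function object with an extra public data member defeats auto-detection but can be marked by hand:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-cpp&quot;&gt;// Auto-detection looks for a record with no public data members,
// so the extra member below disables it; the doc command opts in.
struct clamp_fn {
    double min_value = 0.0; // extra public member

    double operator()(double x) const noexcept
    {
        return x &amp;lt; min_value ? min_value : x;
    }
};

/** Clamp a value from below.

    @functionobject
*/
inline constexpr clamp_fn clamp = {};
&lt;/code&gt;&lt;/pre&gt;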

&lt;h2 id=&quot;reflection-from-boilerplate-to-a-single-generic-template&quot;&gt;Reflection: from boilerplate to a single generic template&lt;/h2&gt;

&lt;p&gt;The bigger effort — and the one that kept surprising me with its scope — was the
reflection refactoring. Mr.Docs converts its internal C++ metadata into a DOM (a
tree of lazy objects) that drives the Handlebars template engine. Before this
quarter, every type in the system required a hand-written &lt;code&gt;tag_invoke()&lt;/code&gt;
overload: one function to map the type’s fields to DOM properties, another to
convert it to a &lt;code&gt;dom::Value&lt;/code&gt;. Adding a new symbol kind meant touching half a
dozen files and following a pattern that was easy to get wrong.&lt;/p&gt;

&lt;p&gt;The goal was simple to state: replace all of that with a single generic template
that works for any type carrying a describe macro.&lt;/p&gt;

&lt;h3 id=&quot;phase-1-boostdescribe&quot;&gt;Phase 1: Boost.Describe&lt;/h3&gt;

&lt;p&gt;The first attempt used Boost.Describe. I added &lt;code&gt;BOOST_DESCRIBE_STRUCT()&lt;/code&gt;
annotations to every metadata type and wrote generic &lt;code&gt;merge()&lt;/code&gt; and
&lt;code&gt;mapReflectedType()&lt;/code&gt; templates that iterated over the described members. This
proved the concept and eliminated a great deal of boilerplate. However, we
didn’t want a public dependency on Boost.Describe, so the dependency had to be
hidden in .cpp files and couldn’t be used in templates living in public
headers.&lt;/p&gt;

&lt;h3 id=&quot;phase-2-custom-reflection-macros&quot;&gt;Phase 2: custom reflection macros&lt;/h3&gt;

&lt;p&gt;So I wrote our own. &lt;code&gt;MRDOCS_DESCRIBE_STRUCT()&lt;/code&gt; and &lt;code&gt;MRDOCS_DESCRIBE_CLASS()&lt;/code&gt;
provide the same compile-time member and base-class iteration as Boost.Describe,
but with no external dependency. The macros live in &lt;code&gt;Describe.hpp&lt;/code&gt; and produce
&lt;code&gt;constexpr&lt;/code&gt; descriptor lists that the rest of the system iterates with
&lt;code&gt;describe::for_each()&lt;/code&gt;.&lt;/p&gt;
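&lt;p&gt;To make the mechanism concrete, here is a toy version of the idea (a sketch, not the actual &lt;code&gt;MRDOCS_DESCRIBE_STRUCT()&lt;/code&gt; implementation): a &lt;code&gt;constexpr&lt;/code&gt; list of (name, member-pointer) descriptors, plus one generic iteration over it.&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-cpp&quot;&gt;#include &amp;lt;string_view&amp;gt;
#include &amp;lt;tuple&amp;gt;
#include &amp;lt;utility&amp;gt;

// A metadata type and its hand-rolled &quot;description&quot;: a constexpr
// tuple of (name, member-pointer) pairs. A macro would generate this.
struct FunctionMetadata {
    std::string_view name;
    bool isVirtual;
};

inline constexpr auto function_metadata_members = std::make_tuple(
    std::pair{std::string_view{&quot;name&quot;}, &amp;amp;FunctionMetadata::name},
    std::pair{std::string_view{&quot;is-virtual&quot;}, &amp;amp;FunctionMetadata::isVirtual});

// Visit every described member of an object with one generic callback.
template &amp;lt;class T, class Members, class F&amp;gt;
void for_each_member(const T&amp;amp; obj, const Members&amp;amp; members, F&amp;amp;&amp;amp; f)
{
    std::apply([&amp;amp;](const auto&amp;amp;... m) { (f(m.first, obj.*m.second), ...); },
               members);
}
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Generic mapping, merging, or serialization templates can then be written once against this interface instead of once per type.&lt;/p&gt;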

&lt;h3 id=&quot;phase-3-removing-the-overloads&quot;&gt;Phase 3: removing the overloads&lt;/h3&gt;

&lt;p&gt;With the describe macros in place, I could write generic implementations of
&lt;code&gt;tag_invoke()&lt;/code&gt; for both &lt;code&gt;LazyObjectMapTag&lt;/code&gt; (DOM mapping) and &lt;code&gt;ValueFromTag&lt;/code&gt;
(value conversion), plus a generic &lt;code&gt;merge()&lt;/code&gt;. Each one replaces dozens of
per-type overloads with a single constrained template. The &lt;code&gt;mapMember()&lt;/code&gt;
function handles the dispatch: optionals are unwrapped, vectors become lazy
arrays, described enums become kebab-case strings, and compound described types
become lazy objects — all automatically.&lt;/p&gt;
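&lt;p&gt;As one concrete piece of that dispatch, the enum-to-string step needs a CamelCase-to-kebab-case conversion. A minimal sketch of such a helper (the name and exact shape are assumptions, not the MrDocs code):&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-cpp&quot;&gt;#include &amp;lt;cctype&amp;gt;
#include &amp;lt;string&amp;gt;
#include &amp;lt;string_view&amp;gt;

// Hypothetical helper: turn a CamelCase enumerator or type name into
// the kebab-case string that ends up in the DOM.
std::string toKebabCase(std::string_view name)
{
    std::string out;
    for (char c : name)
    {
        if (std::isupper(static_cast&amp;lt;unsigned char&amp;gt;(c)))
        {
            if (!out.empty())
                out += '-';
            out += static_cast&amp;lt;char&amp;gt;(std::tolower(static_cast&amp;lt;unsigned char&amp;gt;(c)));
        }
        else
        {
            out += c;
        }
    }
    return out;
}
&lt;/code&gt;&lt;/pre&gt;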

&lt;p&gt;Removing the overloads was not as straightforward as I had hoped. The old
overloads were entangled with:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;The Handlebars templates&lt;/strong&gt;, which assumed specific DOM property names.
Renaming &lt;code&gt;symbol&lt;/code&gt; to &lt;code&gt;id&lt;/code&gt;, &lt;code&gt;type&lt;/code&gt; to &lt;code&gt;underlyingType&lt;/code&gt;, and &lt;code&gt;description&lt;/code&gt; to
&lt;code&gt;document&lt;/code&gt; required updating templates and golden tests in lockstep.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;The XML generator&lt;/strong&gt;, which silently skipped types that weren’t described.
Adding &lt;code&gt;MRDOCS_DESCRIBE_STRUCT()&lt;/code&gt; to &lt;code&gt;TemplateInfo&lt;/code&gt; and &lt;code&gt;MemberPointerType&lt;/code&gt;
made the XML output more complete, requiring schema updates and golden-test
regeneration.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id=&quot;the-result&quot;&gt;The result&lt;/h3&gt;

&lt;p&gt;Out of the original 39 custom &lt;code&gt;tag_invoke(LazyObjectMapTag)&lt;/code&gt; overloads, only 7
remain — each with genuinely non-reflectable logic (computed properties,
polymorphic dispatch, or member decomposition). Roughly 60
&lt;code&gt;tag_invoke(ValueFromTag)&lt;/code&gt; boilerplate overloads were also removed. Adding a new
metadata type to Mr.Docs now requires nothing beyond &lt;code&gt;MRDOCS_DESCRIBE_STRUCT()&lt;/code&gt;
at the point of definition.&lt;/p&gt;

&lt;h2 id=&quot;the-xml-generator-a-full-rewrite-in-350-lines&quot;&gt;The XML Generator: a full rewrite in 350 lines&lt;/h2&gt;

&lt;p&gt;The XML generator was the first major payoff of the reflection work (although it
was initially done when we were using Boost.Describe). The old generator had its
own hand-written serialization for every metadata type, completely independent
of the DOM layer. It was a parallel set of per-type functions that had to be
kept in sync with every schema change.&lt;/p&gt;

&lt;p&gt;I replaced it with a generic implementation built entirely on the describe
macros. The core is about 350 lines: &lt;code&gt;writeMembers()&lt;/code&gt; walks &lt;code&gt;describe_bases&lt;/code&gt; and
&lt;code&gt;describe_members&lt;/code&gt;, &lt;code&gt;writeElement()&lt;/code&gt; dispatches on type traits for primitives,
optionals, vectors, and enums, and &lt;code&gt;writePolymorphic()&lt;/code&gt; handles the handful of
type hierarchies (&lt;code&gt;Type&lt;/code&gt;, &lt;code&gt;TParam&lt;/code&gt;, &lt;code&gt;TArg&lt;/code&gt;, &lt;code&gt;Block&lt;/code&gt;, &lt;code&gt;Inline&lt;/code&gt;) via
.inc-generated switches. The old generator needed a new function for every type;
the new one handles them all, and the 241 files changed in that commit were
almost entirely golden-test updates reflecting the now-more-complete and totally
changed output.&lt;/p&gt;
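&lt;p&gt;A toy version of that trait-based dispatch (hypothetical signatures, not the MrDocs code) shows why a single template can replace a per-type function set:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-cpp&quot;&gt;#include &amp;lt;optional&amp;gt;
#include &amp;lt;ostream&amp;gt;
#include &amp;lt;string_view&amp;gt;
#include &amp;lt;type_traits&amp;gt;
#include &amp;lt;vector&amp;gt;

// One generic writeElement instead of one serialization function per
// type: primitives and strings become an element, empty optionals are
// skipped, and ranges emit one element per item.
template &amp;lt;class T&amp;gt;
void writeElement(std::ostream&amp;amp; os, std::string_view tag, const T&amp;amp; value)
{
    if constexpr (std::is_convertible_v&amp;lt;T, std::string_view&amp;gt;
                  || std::is_arithmetic_v&amp;lt;T&amp;gt;)
    {
        os &amp;lt;&amp;lt; '&amp;lt;' &amp;lt;&amp;lt; tag &amp;lt;&amp;lt; '&amp;gt;' &amp;lt;&amp;lt; value &amp;lt;&amp;lt; &quot;&amp;lt;/&quot; &amp;lt;&amp;lt; tag &amp;lt;&amp;lt; '&amp;gt;';
    }
    else if constexpr (requires { value.has_value(); *value; })
    {
        if (value.has_value())
            writeElement(os, tag, *value); // unwrap optionals
    }
    else if constexpr (requires { value.begin(); value.end(); })
    {
        for (const auto&amp;amp; item : value)
            writeElement(os, tag, item); // one element per item
    }
}
&lt;/code&gt;&lt;/pre&gt;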

&lt;h2 id=&quot;smaller-fixes&quot;&gt;Smaller fixes&lt;/h2&gt;

&lt;p&gt;Alongside the two main efforts, I fixed several bugs that came up during
development or were reported by users:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Markdown inline formatting (bold, italic, code) and bullet lists were not
rendering correctly in certain combinations.&lt;/li&gt;
  &lt;li&gt;&lt;code&gt;&amp;lt;pre&amp;gt;&lt;/code&gt; tags were missing around HTML code blocks.&lt;/li&gt;
  &lt;li&gt;&lt;code&gt;bottomUpTraverse()&lt;/code&gt; was silently skipping &lt;code&gt;ListBlock&lt;/code&gt; items, causing
doc-comment content to be lost.&lt;/li&gt;
  &lt;li&gt;Several CI improvements: faster PR demos, better failure detection, increased
test coverage for the XML generator.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;looking-ahead&quot;&gt;Looking ahead&lt;/h2&gt;

&lt;p&gt;The reflection infrastructure is now in good shape, and most of the mechanical
boilerplate is gone. The remaining &lt;code&gt;tag_invoke()&lt;/code&gt; overloads are genuinely custom
— they compute properties that don’t exist as C++ members, or they dispatch
polymorphically across type hierarchies. Those are worth keeping. Going forward,
I’d like to explore whether the describe macros can replace more of the manual
visitor code throughout the codebase.&lt;/p&gt;

&lt;p&gt;As always, feedback and suggestions are welcome — feel free to open an issue or
reach out on Slack.&lt;/p&gt;</content><author><name></name></author><category term="gennaro" /><summary type="html">This quarter, I focused on two areas of Mr.Docs: adding first-class support for function objects, the pattern behind C++20 Niebloids and Ranges CPOs, and overhauling how the tool turns C++ metadata into documentation output (the reflection layer). Function objects: documenting what users actually call In modern C++ libraries, many “functions” are actually global objects whose type has operator() overloads. The Ranges library, for instance, defines std::ranges::sort() not as a function template but as a variable of some unspecified callable type. Users call it like a function and expect it to be documented like one. Before this quarter, Mr.Docs didn’t know the difference: it would document the variable and its cryptic implementation type. The new function-object support (roughly 4,600 lines across 38 files) bridges this gap. When Mr.Docs encounters a variable whose type is a record with no public members but operator() overloads and special member functions, it now synthesizes free-function documentation entries named after the variable. The underlying type is marked implementation-defined and hidden from the output. Multi-overload function objects are naturally grouped by the existing overload machinery. So, given: struct abs_fn { double operator()(double x) const noexcept; }; inline constexpr abs_fn abs = {}; Mr.Docs documents it as simply: double abs(double x) noexcept; For cases where auto-detection isn’t quite right — for example, when the type has extra public members — library authors can use the new @functionobject or @functor doc commands. There is also an auto-function-objects config option to control the behavior globally. The feature comes with a comprehensive test fixture covering single and multi-overload function objects, templated types, and types that live in nested detail namespaces. 
Reflection: from boilerplate to a single generic template The bigger effort — and the one that kept surprising me with its scope — was the reflection refactoring. Mr.Docs converts its internal C++ metadata into a DOM (a tree of lazy objects) that drives the Handlebars template engine. Before this quarter, every type in the system required a hand-written tag_invoke() overload: one function to map the type’s fields to DOM properties, another to convert it to a dom::Value. Adding a new symbol kind meant touching half a dozen files and following a pattern that was easy to get wrong. The goal was simple to state: replace all of that with a single generic template that works for any type carrying a describe macro. Phase 1: Boost.Describe The first attempt used Boost.Describe. I added BOOST_DESCRIBE_STRUCT() annotations to every metadata type and wrote generic merge() and mapReflectedType() templates that iterated over the described members. This proved the concept and eliminated a great deal of boilerplate. However, we didn’t want a public dependency on Boost.Describe, which meant the dependency was hidden in .cpp files and couldn’t be used in templates living in public heades, Phase 2: custom reflection macros So I wrote our own. MRDOCS_DESCRIBE_STRUCT() and MRDOCS_DESCRIBE_CLASS() provide the same compile-time member and base-class iteration as Boost.Describe, but with no external dependency. The macros live in Describe.hpp and produce constexpr descriptor lists that the rest of the system iterates with describe::for_each(). Phase 3: removing the overloads With the describe macros in place, I could write generic implementations of tag_invoke() for both LazyObjectMapTag (DOM mapping) and ValueFromTag (value conversion), plus a generic merge(). Each one replaces dozens of per-type overloads with a single constrained template. 
The mapMember() function handles the dispatch: optionals are unwrapped, vectors become lazy arrays, described enums become kebab-case strings, and compound described types become lazy objects — all automatically. Removing the overloads was not as straightforward as I had hoped. The old overloads were entangled with: The Handlebars templates, which assumed specific DOM property names. Renaming symbol to id, type to underlyingType, and description to document required updating templates and golden tests in lockstep. The XML generator, which silently skipped types that weren’t described. Adding MRDOCS_DESCRIBE_STRUCT() to TemplateInfo and MemberPointerType made the XML output more complete, requiring schema updates and golden-test regeneration. The result Out of the original 39 custom tag_invoke(LazyObjectMapTag) overloads, only 7 remain — each with genuinely non-reflectable logic (computed properties, polymorphic dispatch, or member decomposition). Roughly 60 tag_invoke(ValueFromTag) boilerplate overloads were also removed. Adding a new metadata type to Mr.Docs now requires nothing beyond MRDOCS_DESCRIBE_STRUCT() at the point of definition. The XML Generator: a full rewrite in 350 lines The XML generator was the first major payoff of the reflection work (although it was initially done when we were using Boost.Describe). The old generator had its own hand-written serialization for every metadata type, completely independent of the DOM layer. It was a parallel set of per-type functions that had to be kept in sync with every schema change. I replaced it with a generic implementation built entirely on the describe macros. The core is about 350 lines: writeMembers() walks describe_bases and describe_members, writeElement() dispatches on type traits for primitives, optionals, vectors, and enums, and writePolymorphic() handles the handful of type hierarchies (Type, TParam, TArg, Block, Inline) via .inc-generated switches. 
The old generator needed a new function for every type; the new one handles them all, and the 241 files changed in that commit were almost entirely golden-test updates reflecting the now-more-complete and totally changed output. Smaller fixes Alongside the two main efforts, I fixed several bugs that came up during development or were reported by users: Markdown inline formatting (bold, italic, code) and bullet lists were not rendering correctly in certain combinations. &amp;lt;pre&amp;gt; tags were missing around HTML code blocks. bottomUpTraverse() was silently skipping ListBlock items, causing doc-comment content to be lost. Several CI improvements: faster PR demos, better failure detection, increased test coverage for the XML generator. Looking ahead The reflection infrastructure is now in good shape, and most of the mechanical boilerplate is gone. The remaining tag_invoke() overloads are genuinely custom — they compute properties that don’t exist as C++ members, or they dispatch polymorphically across type hierarchies. Those are worth keeping. Going forward, I’d like to explore whether the describe macros can replace more of the manual visitor code throughout the codebase. 
As always, feedback and suggestions are welcome — feel free to open an issue or reach out on Slack.</summary></entry><entry><title type="html">Speed and Safety</title><link href="http://cppalliance.org/matt/2026/04/06/Matts2026Q1Update.html" rel="alternate" type="text/html" title="Speed and Safety" /><published>2026-04-06T00:00:00+00:00</published><updated>2026-04-06T00:00:00+00:00</updated><id>http://cppalliance.org/matt/2026/04/06/Matts2026Q1Update</id><content type="html" xml:base="http://cppalliance.org/matt/2026/04/06/Matts2026Q1Update.html">&lt;p&gt;In my &lt;a href=&quot;https://cppalliance.org/matt/2026/01/15/Matts2025Q4Update.html&quot;&gt;last post&lt;/a&gt; I mentioned that &lt;a href=&quot;https://github.com/cppalliance/int128&quot;&gt;int128&lt;/a&gt; library would be getting CUDA support in the future.
The good news is that the future is now!
Nearly all the functions in the library are available on both host and device.
Any function that has &lt;code&gt;BOOST_INT128_HOST_DEVICE&lt;/code&gt; in its signature in the &lt;a href=&quot;https://develop.int128.cpp.al/overview.html&quot;&gt;documentation&lt;/a&gt; is available for usage.
&lt;a href=&quot;https://develop.int128.cpp.al/examples.html#examples_cuda&quot;&gt;An example&lt;/a&gt; of how to use the types in the CUDA kernels has been added as well.
These can be as simple as:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-cpp&quot;&gt;using test_type = boost::int128::uint128_t;

__global__ void cuda_mul(const test_type* in1, const test_type* in2, test_type* out, int num_elements)
{
    int i = blockDim.x * blockIdx.x + threadIdx.x;

    if (i &amp;lt; num_elements)
    {
        out[i] = in1[i] * in2[i];
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Other Boost libraries are or will be beneficiaries of this effort as well.
First, Boost.Charconv now supports &lt;code&gt;boost::charconv::from_chars&lt;/code&gt; and &lt;code&gt;boost::charconv::to_chars&lt;/code&gt; for integers being run on device.
This can give you up to an order of magnitude improvement in performance.
These results and benchmarks are available in the &lt;a href=&quot;https://www.boost.org/doc/libs/develop/libs/charconv/doc/html/charconv.html&quot;&gt;Boost.Charconv documentation&lt;/a&gt;.
Next, in the coming months Boost.Decimal will gain CUDA support as part of this effort.
We think users will benefit greatly from being able to perform massively parallel parsing, serialization, and calculations on decimal numbers.
Stay tuned for this likely in Boost 1.92.
In the meantime, enjoy the initial release of Decimal coming in Boost 1.91!&lt;/p&gt;

&lt;p&gt;Alongside the performance we’re looking to deliver in coming versions of Boost, we must not forget the importance of safety.
There exist plenty of &lt;a href=&quot;https://en.wikipedia.org/wiki/Integer_overflow#Examples&quot;&gt;examples of damage and death&lt;/a&gt; caused by arithmetic errors in computer programs.
Can we create a library that provides guaranteed safety in arithmetic while minimizing performance losses and integration friction?
How does one guarantee the behavior of their types?
In our implementation, &lt;a href=&quot;https://github.com/cppalliance/safe_numbers&quot;&gt;Boost.Safe_Numbers&lt;/a&gt;, we are investigating the usage of the &lt;a href=&quot;https://why3.org&quot;&gt;Why3&lt;/a&gt; platform for deductive program verification.
By pursuing these formal methods, safety can have real meaning.
We will continue to provide additional details as part of the &lt;a href=&quot;https://develop.safe-numbers.cpp.al/verification.html&quot;&gt;formal verification page&lt;/a&gt; of our documentation.
Since the library will inevitably surface more errors (which is a good thing), we aim to fail as early as possible and, when we do, to provide the most helpful error message that we can.
For example, we have some static arithmetic errors reported in as few as three lines:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-cpp&quot;&gt;clang-darwin.compile.c++ ../../../bin.v2/libs/safe_numbers/test/compile_fail_basic_usage_constexpr.test/clang-darwin-21/debug/arm_64/cxxstd-20-iso/threading-multi/visibility-hidden/compile_fail_basic_usage_constexpr.o
../examples/compile_fail_basic_usage_constexpr.cpp:18:22: error: constexpr variable 'z' must be initialized by a constant expression
   18 |         constexpr u8 z {x + y};
      |                      ^ ~~~~~~~
../../../boost/safe_numbers/detail/unsigned_integer_basis.hpp:397:17: note: subexpression not valid in a constant expression
  397 |                 throw std::overflow_error(&quot;Overflow detected in u8 addition&quot;);
      |                 ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
../examples/compile_fail_basic_usage_constexpr.cpp:18:25: note: in call to 'operator+&amp;lt;unsigned char&amp;gt;({255}, {2})'
   18 |         constexpr u8 z {x + y};
      |                         ^~~~~
1 error generated.
&lt;/code&gt;&lt;/pre&gt;
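&lt;p&gt;The mechanism behind that diagnostic can be illustrated with a minimal checked type (a sketch, not the Boost.Safe_Numbers implementation): because a &lt;code&gt;throw&lt;/code&gt; is not a valid constant subexpression, an overflow inside a &lt;code&gt;constexpr&lt;/code&gt; evaluation surfaces as a compile-time error like the one above.&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-cpp&quot;&gt;#include &amp;lt;cstdint&amp;gt;
#include &amp;lt;stdexcept&amp;gt;

// Minimal sketch: a checked 8-bit unsigned type whose addition throws
// on overflow. At runtime this raises an exception; in a constant
// expression the throw makes the initializer non-constant, so the
// compiler rejects it, as in the diagnostic above.
struct checked_u8 {
    std::uint8_t value;
};

constexpr checked_u8 operator+(checked_u8 a, checked_u8 b)
{
    unsigned sum = static_cast&amp;lt;unsigned&amp;gt;(a.value) + b.value;
    if (sum &amp;gt; 255u)
        throw std::overflow_error(&quot;Overflow detected in u8 addition&quot;);
    return checked_u8{static_cast&amp;lt;std::uint8_t&amp;gt;(sum)};
}

// constexpr checked_u8 z{checked_u8{255} + checked_u8{2}}; // rejected at compile time
&lt;/code&gt;&lt;/pre&gt;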

&lt;p&gt;Our runtime error reporting system is built on Boost.Throw_Exception, so it can report not only the type, operation, file, and line, but also an entire stack trace when optionally linked against Boost.Stacktrace.
And our discussion of CUDA is not forgotten: the Safe_Numbers library will have CUDA support as well.
One thing that we will continue to refine is synchronizing error reporting on device, since one cannot throw an exception there.&lt;/p&gt;

&lt;p&gt;We are always looking for users of all the libraries discussed.
If you are a current or prospective user, feel free to reach out and let us know what you’re using it for, or any issues that you find.&lt;/p&gt;</content><author><name></name></author><category term="matt" /><summary type="html">In my last post I mentioned that int128 library would be getting CUDA support in the future. The good news is that the future is now! Nearly all the functions in the library are available on both host and device. Any function that has BOOST_INT128_HOST_DEVICE in its signature in the documentation is available for usage. An example of how to use the types in the CUDA kernels has been added as well. These can be as simple as: using test_type = boost::int128::uint128_t; __global__ void cuda_mul(const test_type* in1, const test_type* in2, test_type* out, int num_elements) { int i = blockDim.x * blockIdx.x + threadIdx.x; if (i &amp;lt; num_elements) { out[i] = in1[i] * in2[i]; } } Other Boost libraries are or will be beneficiaries of this effort as well. First, Boost.Charconv now supports boost::charconv::from_chars and boost::charconv::to_chars for integers being run on device. This can give you up to an order of magnitude improvement in performance. These results and benchmarks are available in the Boost.Charconv documentation. Next, in the coming months Boost.Decimal will gain CUDA support as part of this effort. We think users will benefit greatly from being able to perform massively parallel parsing, serialization, and calculations on decimal numbers. Stay tuned for this likely in Boost 1.92. In the meantime, enjoy the initial release of Decimal coming in Boost 1.91! On the other side of the performance that we’re looking to deliver in coming versions of Boost, we must not forget the importance of safety. There exist plenty of examples of damage and death caused by arithmetic errors in computer programs. Can we create a library that provides guaranteed safety in arithmetic while minimizing performance losses and integration friction? 
How does one guarantee the behavior of their types? In our implementation, Boost.Safe_Numbers, we are investigating the usage of the Why3 platform for deductive program verification. By pursuing these formal methods, safety can have real meaning. We will continue to provide additional details as part of the formal verification page of our documentation. Since inevitably the library will cause an increase in the number of errors (which is a good thing), we aim to fail as early as possible, and when we do provide the most helpful error message that we can. For example, we have some static arithmetic errors reported in as few as three lines: clang-darwin.compile.c++ ../../../bin.v2/libs/safe_numbers/test/compile_fail_basic_usage_constexpr.test/clang-darwin-21/debug/arm_64/cxxstd-20-iso/threading-multi/visibility-hidden/compile_fail_basic_usage_constexpr.o ../examples/compile_fail_basic_usage_constexpr.cpp:18:22: error: constexpr variable 'z' must be initialized by a constant expression 18 | constexpr u8 z {x + y}; | ^ ~~~~~~~ ../../../boost/safe_numbers/detail/unsigned_integer_basis.hpp:397:17: note: subexpression not valid in a constant expression 397 | throw std::overflow_error(&quot;Overflow detected in u8 addition&quot;); | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ../examples/compile_fail_basic_usage_constexpr.cpp:18:25: note: in call to 'operator+&amp;lt;unsigned char&amp;gt;({255}, {2})' 18 | constexpr u8 z {x + y}; | ^~~~~ 1 error generated. Our runtime error reporting system fundamentally uses Boost.Throw_Exception so it can report not only the type, operation, file and line, but also up to an entire stack trace when leveraging the optional linking with Boost.Stacktrace. Not to forget our discussion of CUDA so quickly, the Safe_Numbers library will have CUDA support. One thing that we will continue to refine is synchronizing error reporting on device as one cannot throw an exception on device. 
We are always looking for users of all the libraries discussed. If you are a current or prospective user, feel free to reach out and let us know what you’re using it for, or any issues that you find.</summary></entry><entry><title type="html">The road to C++20 modules, Capy and Redis</title><link href="http://cppalliance.org/ruben/2026/04/06/Ruben2026Q1Update.html" rel="alternate" type="text/html" title="The road to C++20 modules, Capy and Redis" /><published>2026-04-06T00:00:00+00:00</published><updated>2026-04-06T00:00:00+00:00</updated><id>http://cppalliance.org/ruben/2026/04/06/Ruben2026Q1Update</id><content type="html" xml:base="http://cppalliance.org/ruben/2026/04/06/Ruben2026Q1Update.html">&lt;h2 id=&quot;modules-in-using-stdcpp-2026&quot;&gt;Modules in using std::cpp 2026&lt;/h2&gt;

&lt;p&gt;C++20 modules have been in the standard for 6 years already, but we’re not seeing
widespread adoption. The ecosystem is still getting ready. As a quick example,
&lt;code&gt;import std&lt;/code&gt;, an absolute blessing for compile times, requires build system support,
and this is still experimental as of CMake 4.3.1.&lt;/p&gt;

&lt;p&gt;And yet, I’ve realized that writing module-native applications is really enjoyable.
The system is well thought out and allows for better encapsulation,
just as you’d get in a modern programming language.
I’ve been using my &lt;a href=&quot;https://github.com/anarthal/servertech-chat/tree/feature/cxx20-modules&quot;&gt;Servertech Chat project&lt;/a&gt;
(a webserver that uses Boost.Asio and companion libraries) to get a taste
of what modules really look like in real code.&lt;/p&gt;

&lt;p&gt;When writing this, I saw clearly that having big dependencies that can’t be consumed
via &lt;code&gt;import&lt;/code&gt; is a big problem. With the scheme I used, compile times got 66% worse
instead of improving. This is because when writing modules, you tend to have
a bigger number of translation units. These are supposed to be much more lightweight,
but if you’re relying on &lt;code&gt;#include&lt;/code&gt; for third-party libraries, they’re not.&lt;/p&gt;

&lt;p&gt;For example:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-cpp&quot;&gt;//
// File: redis_client.cppm. Contains only the interface declaration (somewhat like a header does)
//
module;

// No import boost yet - must be in the global module fragment
#include &amp;lt;boost/asio/awaitable.hpp&amp;gt;
#include &amp;lt;boost/system/result.hpp&amp;gt;

module servertech_chat:redis_client;
import std;

namespace chat {

class redis_client
{
public:
    virtual ~redis_client() {}
    virtual boost::asio::awaitable&amp;lt;boost::system::result&amp;lt;std::int64_t&amp;gt;&amp;gt; get_int_key(std::string_view key) = 0;
    // ...
};

}

//
// File: redis_client.cpp. Contains the implementation
//
module;

#include &amp;lt;boost/redis/connection.hpp&amp;gt;

module servertech_chat;
import :redis_client;
import std;

namespace {

class redis_client_impl final : public redis_client { /* ... */ };

}

&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;I analyze this in much more depth in
&lt;a href=&quot;https://youtu.be/hD9JHkt7e2Y&quot;&gt;the talk I’ve had the pleasure to give at using std::cpp&lt;/a&gt;
this March in Madrid. The TL;DR is that supporting &lt;code&gt;import boost&lt;/code&gt; natively
is very important for any serious usage of Boost in the modules world.&lt;/p&gt;

&lt;h2 id=&quot;import-boost-is-upon-us&quot;&gt;&lt;code&gt;import boost&lt;/code&gt; is upon us&lt;/h2&gt;

&lt;p&gt;As you may know, I prefer doing to saying, and I’ve been writing a prototype to support
&lt;code&gt;import boost&lt;/code&gt; natively while keeping today’s header code as is. This prototype has
seen substantial advancements during these months.&lt;/p&gt;

&lt;p&gt;I’ve developed a &lt;a href=&quot;https://github.com/anarthal/boost-cmake/blob/feature/cxx20-modules/modules.md&quot;&gt;systematic approach for modularization&lt;/a&gt;,
and we’ve settled for the ABI-breaking style, with compatibility headers.
I’ve added support for GCC (the remaining compiler) to the core libraries
that we already supported (Config, Mp11, Core, Assert, ThrowException, Charconv),
and I’ve added modular bindings for Variant2, Compat, Endian, System, TypeTraits,
Optional, ContainerHash, IO and Asio.
These are tested only under Clang so far - it’s part of a discovery process.
The idea is to modularize the flagship libraries
to verify that the approach works, and to measure compile-time improvements.&lt;/p&gt;

&lt;p&gt;There is still a lot to do before things become functional.
I’ve received helpful feedback from many community members, which has been invaluable.&lt;/p&gt;

&lt;h2 id=&quot;redis-meets-capy&quot;&gt;Redis meets Capy&lt;/h2&gt;

&lt;p&gt;If you’re a user of Boost.Asio and coroutines, you probably know that there are new players
in town - Capy and Corosio. They’re a coroutines-native Asio replacement that promises
a range of benefits, from improved expressiveness to saner compile times,
without performance loss.&lt;/p&gt;

&lt;p&gt;Since I maintain Boost.MySQL and co-maintain Boost.Redis, I know the pain of writing
operations using the universal Asio model. Lifetime management is difficult to follow,
testing is complex, and things must remain header-only (and usually heavily templatized).
Coroutine code is much simpler to write and understand, and it’s what I use whenever I can.
So obviously I’m interested in this project.&lt;/p&gt;

&lt;p&gt;My long-term idea is to create a v2 version of MySQL and Redis that exposes a Capy/Corosio
interface. As a proof of concept, I migrated Boost.Redis and some of its tests.
Some polishing is still needed, but - it works!
You can read the &lt;a href=&quot;https://lists.boost.org/archives/list/boost@lists.boost.org/thread/FSX5H3MDQSLO3VZFEOUINUZPYQFCIASB/&quot;&gt;full report on the Boost mailing list&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Some sample code as an appetizer:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-cpp&quot;&gt;capy::task&amp;lt;void&amp;gt; run_request(connection&amp;amp; conn)
{
    // A request containing only a ping command.
    request req;
    req.push(&quot;PING&quot;, &quot;Hello world&quot;);

    // Response where the PONG response will be stored.
    response&amp;lt;std::string&amp;gt; resp;

    // Executes the request.
    auto [ec] = co_await conn.exec(req, resp);
    if (ec)
        co_return;
    std::cout &amp;lt;&amp;lt; &quot;PING value: &quot; &amp;lt;&amp;lt; std::get&amp;lt;0&amp;gt;(resp).value() &amp;lt;&amp;lt; std::endl;
}

capy::task&amp;lt;void&amp;gt; co_main()
{
    connection conn{(co_await capy::this_coro::executor).context()};
    co_await capy::when_any(
        // Sends the request
        run_request(conn),

        // Performs connection establishment, re-connection, pings...
        conn.run(config{})
    );
}
&lt;/code&gt;&lt;/pre&gt;

&lt;h2 id=&quot;redis-pubsub-improvements&quot;&gt;Redis PubSub improvements&lt;/h2&gt;

&lt;p&gt;Working with PubSub messages in Boost.Redis has always been more involved than in other libraries.
For example, we support transparent reconnection, but (before 1.91), the user had to explicitly
re-establish subscriptions:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-cpp&quot;&gt;request req;
req.push(&quot;SUBSCRIBE&quot;, &quot;channel&quot;);
while (conn-&amp;gt;will_reconnect()) {
    // Re-subscribe to the channels.
    co_await conn-&amp;gt;async_exec(req, ignore);

    // ...
}
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Boost 1.91 has added PubSub state restoration. A fancy name for a simple feature:
established subscriptions are recorded, and when a reconnection happens,
they are re-established automatically:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-cpp&quot;&gt;// Subscribe to the channel 'mychannel'. If a re-connection happens,
// an appropriate SUBSCRIBE command is issued to re-establish the subscription.
request req;
req.subscribe({&quot;mychannel&quot;});
co_await conn-&amp;gt;async_exec(req);
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Boost 1.91 also adds &lt;code&gt;flat_tree&lt;/code&gt;, a specialized container for Redis messages
with an emphasis on memory reuse, performance and usability.
This container is especially appropriate when dealing with PubSub.
We’ve also added &lt;code&gt;connection::async_receive2()&lt;/code&gt;, a higher-performance
replacement for &lt;code&gt;connection::async_receive()&lt;/code&gt; that consumes messages in batches,
rather than one by one, eliminating re-scheduling overhead.
Finally, there’s &lt;code&gt;push_parser&lt;/code&gt;, a view that transforms raw RESP3 nodes into user-friendly structures.&lt;/p&gt;
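To make the batch-parsing idea concrete in isolation, here is a toy sketch of a single pass over a flat node sequence. The names `message_view` and `parse_pushes`, and the "three nodes per message" layout, are invented for illustration; the real RESP3 node types and Boost.Redis's actual `push_parser` are more involved:

```cpp
#include <cstddef>
#include <string>
#include <vector>

// Hypothetical flattened push message: each message occupies three
// consecutive raw nodes (message kind, channel, payload).
struct message_view {
    std::string channel;
    std::string payload;
};

// One linear pass over the raw nodes, yielding a structured view of
// each message instead of making the user walk the nodes manually.
std::vector<message_view> parse_pushes(const std::vector<std::string>& nodes) {
    std::vector<message_view> out;
    for (std::size_t i = 0; i + 3 <= nodes.size(); i += 3)
        out.push_back({nodes[i + 1], nodes[i + 2]});
    return out;
}
```

The point is that the raw buffer is consumed once per batch, and the user only ever sees channel/payload pairs.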

&lt;p&gt;With these improvements, code goes from:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-cpp&quot;&gt;// Loop while reconnection is enabled
while (conn-&amp;gt;will_reconnect()) {

    // Re-subscribe to channels.
    co_await conn-&amp;gt;async_exec(req, ignore);

    // Loop reading Redis push messages.
    for (error_code ec;;) {
        // First try to read any buffered pushes.
        conn-&amp;gt;receive(ec);
        if (ec == error::sync_receive_push_failed) {
            ec = {};

            // Wait for pushes
            co_await conn-&amp;gt;async_receive(asio::redirect_error(asio::use_awaitable, ec));
        }

        if (ec)
            break;  // Connection lost, break so we can reconnect to channels.

        // Left to the user: resp contains raw RESP3 nodes, which need to be parsed manually!

        // Remove the nodes corresponding to one message
        consume_one(resp);
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;To:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-cpp&quot;&gt;// Loop to read Redis push messages.
while (conn-&amp;gt;will_reconnect()) {
    // No need to reconnect, we now have PubSub state restoration
    // Wait for pushes
    auto [ec] = co_await conn-&amp;gt;async_receive2(asio::as_tuple);
    if (ec)
        break; // Cancelled

    // Consume the messages
    for (push_view elem : push_parser(resp.value()))
        std::cout &amp;lt;&amp;lt; &quot;Received message from channel &quot; &amp;lt;&amp;lt; elem.channel &amp;lt;&amp;lt; &quot;: &quot; &amp;lt;&amp;lt; elem.payload &amp;lt;&amp;lt; &quot;\n&quot;;

    // Clear the whole batch
    resp.value().clear();
}
&lt;/code&gt;&lt;/pre&gt;</content><author><name></name></author><category term="ruben" /></entry><entry><title type="html">Hubs, intervals and math</title><link href="http://cppalliance.org/joaquin/2026/04/02/Joaquins2026Q1Update.html" rel="alternate" type="text/html" title="Hubs, intervals and math" /><published>2026-04-02T00:00:00+00:00</published><updated>2026-04-02T00:00:00+00:00</updated><id>http://cppalliance.org/joaquin/2026/04/02/Joaquins2026Q1Update</id><content type="html" xml:base="http://cppalliance.org/joaquin/2026/04/02/Joaquins2026Q1Update.html">&lt;p&gt;During Q1 2026, I’ve been working in the following areas:&lt;/p&gt;

&lt;h3 id=&quot;boostcontainerhub&quot;&gt;&lt;code&gt;boost::container::hub&lt;/code&gt;&lt;/h3&gt;

&lt;p&gt;&lt;a href=&quot;https://github.com/joaquintides/hub&quot;&gt;&lt;code&gt;boost::container::hub&lt;/code&gt;&lt;/a&gt; is a nearly drop-in replacement for
C++26 &lt;a href=&quot;https://eel.is/c++draft/sequences#hive&quot;&gt;&lt;code&gt;std::hive&lt;/code&gt;&lt;/a&gt; sporting a simpler data structure and
offering competitive performance with respect to the de facto reference implementation,
&lt;a href=&quot;https://github.com/mattreecebentley/plf_hive&quot;&gt;&lt;code&gt;plf::hive&lt;/code&gt;&lt;/a&gt;. When I first read about &lt;code&gt;std::hive&lt;/code&gt;,
I couldn’t help thinking how complex the container’s internal design is, and wondered
whether something leaner could in fact be more effective. &lt;code&gt;boost::container::hub&lt;/code&gt; critically relies
on two realizations:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Identification of empty slots by way of &lt;a href=&quot;https://en.cppreference.com/w/cpp/numeric/countr_zero.html&quot;&gt;&lt;code&gt;std::countr_zero&lt;/code&gt;&lt;/a&gt;
operations on a bitmask is extremely fast.&lt;/li&gt;
  &lt;li&gt;Modern allocators are very fast, too: &lt;code&gt;boost::container::hub&lt;/code&gt; does many more allocations
than &lt;code&gt;plf::hive&lt;/code&gt;, but this doesn’t degrade its performance significantly (although it affects
cache locality).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;code&gt;boost::container::hub&lt;/code&gt; is formally proposed for inclusion in Boost.Container and will be
officially reviewed April 16-26. Ion Gaztañaga serves as the review manager.&lt;/p&gt;

&lt;h3 id=&quot;using-stdcpp-2026&quot;&gt;using std::cpp 2026&lt;/h3&gt;

&lt;p&gt;I gave my talk &lt;a href=&quot;https://github.com/joaquintides/usingstdcpp2026&quot;&gt;“The Mathematical Mind of a C++ Programmer”&lt;/a&gt;
at the &lt;a href=&quot;https://eventos.uc3m.es/141471/detail/using-std-cpp-2026.html&quot;&gt;using std::cpp 2026&lt;/a&gt; conference,
which took place in Madrid on March 16-19. I had a lot of fun preparing the presentation and
delivering the actual talk, and some interesting discussions were had around it.
This is a subject I’ve been wanting to talk about for decades, so I’m somewhat relieved I finally
got around to it this year. I’m always happy to discuss C++ and math, so if you have feedback
or want to continue the conversation, please reach out.&lt;/p&gt;

&lt;h3 id=&quot;boostunordered&quot;&gt;Boost.Unordered&lt;/h3&gt;

&lt;ul&gt;
  &lt;li&gt;Wrote maintenance fixes
&lt;a href=&quot;https://github.com/boostorg/unordered/pull/328&quot;&gt;PR#328&lt;/a&gt;,
&lt;a href=&quot;https://github.com/boostorg/unordered/pull/335&quot;&gt;PR#335&lt;/a&gt;,
&lt;a href=&quot;https://github.com/boostorg/unordered/pull/336&quot;&gt;PR#336&lt;/a&gt;,
&lt;a href=&quot;https://github.com/boostorg/unordered/pull/337&quot;&gt;PR#337&lt;/a&gt;,
&lt;a href=&quot;https://github.com/boostorg/unordered/pull/339&quot;&gt;PR#339&lt;/a&gt;,
&lt;a href=&quot;https://github.com/boostorg/unordered/pull/344&quot;&gt;PR#344&lt;/a&gt;,
&lt;a href=&quot;https://github.com/boostorg/unordered/pull/345&quot;&gt;PR#345&lt;/a&gt;. Some of these fixes are related
to Node.js vulnerabilities in the Antora setup used for doc building: as the number
of Boost libraries using Antora is bound to grow, maybe we should think of an automated
way to fix these vulnerabilities for the whole project.&lt;/li&gt;
  &lt;li&gt;Reviewed and merged
&lt;a href=&quot;https://github.com/boostorg/unordered/pull/317&quot;&gt;PR#317&lt;/a&gt;,
&lt;a href=&quot;https://github.com/boostorg/unordered/pull/332&quot;&gt;PR#332&lt;/a&gt;,
&lt;a href=&quot;https://github.com/boostorg/unordered/pull/334&quot;&gt;PR#334&lt;/a&gt;,
&lt;a href=&quot;https://github.com/boostorg/unordered/pull/341&quot;&gt;PR#341&lt;/a&gt;,
&lt;a href=&quot;https://github.com/boostorg/unordered/pull/342&quot;&gt;PR#342&lt;/a&gt;. Many thanks to
Sam Darwin, Braden Ganetsky and Andrey Semashev for their contributions.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id=&quot;boostbimap&quot;&gt;Boost.Bimap&lt;/h3&gt;

&lt;p&gt;Merged
&lt;a href=&quot;https://github.com/boostorg/bimap/pull/31&quot;&gt;PR#31&lt;/a&gt; (&lt;code&gt;std::initializer_list&lt;/code&gt;
constructor) and provided testing and documentation for this new
feature (&lt;a href=&quot;https://github.com/boostorg/bimap/pull/54&quot;&gt;PR#54&lt;/a&gt;). The original
PR had been silently sitting in the queue for more than four years, and it
was only when it was brought to my attention in a Reddit conversation that
I got to take a look at it. Boost.Bimap needs an active maintainer;
I guess I could become this person.&lt;/p&gt;

&lt;h3 id=&quot;boosticl&quot;&gt;Boost.ICL&lt;/h3&gt;

&lt;p&gt;&lt;a href=&quot;https://github.com/llvm/llvm-project/pull/161366&quot;&gt;Recent changes&lt;/a&gt; in libc++ v22’s
code for associative container lookup have resulted in the
&lt;a href=&quot;https://github.com/boostorg/icl/issues/51&quot;&gt;breakage of Boost.ICL&lt;/a&gt;.
My understanding is that the changes in libc++ are not
standards-conformant, and there’s an &lt;a href=&quot;https://github.com/llvm/llvm-project/issues/187667&quot;&gt;ongoing discussion&lt;/a&gt;
about that; in the meantime, I wrote and proposed a &lt;a href=&quot;https://github.com/boostorg/icl/pull/54&quot;&gt;PR&lt;/a&gt;
to Boost.ICL that fixes the problem (pending acceptance).&lt;/p&gt;

&lt;h3 id=&quot;support-to-the-community&quot;&gt;Support to the community&lt;/h3&gt;

&lt;ul&gt;
  &lt;li&gt;I’ve been helping a bit with Mark Cooper’s very successful
&lt;a href=&quot;https://x.com/search?q=%22Boost%20Blueprint%22&amp;amp;src=typed_query&amp;amp;f=live&quot;&gt;Boost Blueprint&lt;/a&gt;
series on X.&lt;/li&gt;
  &lt;li&gt;Supporting the community as a member of the Fiscal Sponsorship Committee (FSC).&lt;/li&gt;
&lt;/ul&gt;</content><author><name></name></author><category term="joaquin" /></entry><entry><title type="html">Systems, CI Updates Q1 2026</title><link href="http://cppalliance.org/sam/2026/03/31/SamsQ1Update.html" rel="alternate" type="text/html" title="Systems, CI Updates Q1 2026" /><published>2026-03-31T00:00:00+00:00</published><updated>2026-03-31T00:00:00+00:00</updated><id>http://cppalliance.org/sam/2026/03/31/SamsQ1Update</id><content type="html" xml:base="http://cppalliance.org/sam/2026/03/31/SamsQ1Update.html">&lt;h3 id=&quot;code-coverage-reports---designing-new-gcovr-templates&quot;&gt;Code Coverage Reports - designing new GCOVR templates&lt;/h3&gt;

&lt;p&gt;A major effort this quarter, ongoing since it was mentioned in the last newsletter, has been the development of codecov-like coverage reports that run in GitHub Actions and are hosted on GitHub Pages. Instructions: &lt;a href=&quot;https://github.com/boostorg/boost-ci/blob/master/docs/code-coverage.md&quot;&gt;Code Coverage with Github Actions and Github Pages&lt;/a&gt;. The process has really highlighted a phenomenon in open-source software: when you publish something to the whole community, end users respond with their own suggestions and fixes, and everything improves iteratively. It would not have happened otherwise. The upstream GCOVR project has taken an interest in the templates and we are working on merging them into the main repository for all gcovr users. Boost contributors and gcovr maintainers have suggested numerous modifications for the templates. Great work by Julio Estrada on the template development.&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Better full page scrolling of C++ source code files&lt;/li&gt;
  &lt;li&gt;Include ‘functions’ listings on every page&lt;/li&gt;
  &lt;li&gt;Optionally disable branch coverage&lt;/li&gt;
  &lt;li&gt;Purposely restrict coverage directories to src/ and include/&lt;/li&gt;
  &lt;li&gt;Another scrolling bug fixed&lt;/li&gt;
  &lt;li&gt;Both blue and green colored themes&lt;/li&gt;
  &lt;li&gt;Codacy linting&lt;/li&gt;
  &lt;li&gt;New forward and back buttons that allow navigating to each “miss”, including on subsequent pages&lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id=&quot;server-hosting&quot;&gt;Server Hosting&lt;/h3&gt;

&lt;p&gt;This quarter we decommissioned the Rackspace servers, which had been in service for 10-15 years. Rene provided a nice announcement:&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://lists.boost.org/archives/list/boost@lists.boost.org/thread/XYFD42TTQRYHWTLGP6GCIZQ6NHCZLNQT/&quot;&gt;Farewell to Wowbagger - End of an Era for boost.org&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;There was more to do than just delete servers. I built a new results.boost.org FTP server replacing the preexisting FTP server used by regression.boost.org, then configured and tested it. Inventoried the old machines, including a monitoring server. Built a replacement for wowbagger, called wowbagger2, to host a copy of the website - original.boost.org. The monthly cost of a small GCP Compute instance seems to be around 5% of the Rackspace legacy cloud server. Components: Ubuntu 24.04, Apache, PHP 5 PPA. “original.boost.org” continues to host a copy of the earlier boost.org website for comparison and development purposes, which is interesting to check out.&lt;/p&gt;

&lt;p&gt;Launched server instances for corosio.org and paperflow.&lt;/p&gt;

&lt;h3 id=&quot;fil-c&quot;&gt;Fil-C&lt;/h3&gt;

&lt;p&gt;Working with Tom Kent to add &lt;a href=&quot;https://github.com/pizlonator/fil-c&quot;&gt;Fil-C&lt;/a&gt; tests to the &lt;a href=&quot;https://regression.boost.org/&quot;&gt;regression matrix&lt;/a&gt;.
Built a Fil-C container image based on the Drone images and debugged the build process.
After a few roadblocks, the latest news is that Fil-C seems to be building successfully. This is not quite finished but should be online soon.&lt;/p&gt;

&lt;h3 id=&quot;boost-release-process-boostorgrelease-tools&quot;&gt;Boost release process boostorg/release-tools&lt;/h3&gt;

&lt;p&gt;The boostorg/boost CircleCI jobs often threaten to cross the 1-hour time limit. Increased parallel processes from 4 to 8 and increased the instance size from medium to large.
And yet another adjustment: the releases use four compression formats (gz, bz2, 7z, zip), and it is possible to find drop-in replacement programs, such as lbzip2 and pigz, that
go much faster than the standard ones by utilizing parallelization. The substitute binaries were applied to publish-releases.py recently, and now the same idea has been applied to ci_boost_release.py. All of this reduced the CircleCI job time by many minutes.&lt;/p&gt;

&lt;p&gt;Certain Boost library pull requests were finally merged after a long delay, allowing an upgrade of the Sphinx pip package. Tested a superproject container image for the CircleCI jobs with updated pip packages. Boost is currently in a code freeze, so this will not go live until after 1.91.0. Sphinx docs continue to deal with upgrade incompatibilities; I prepared another set of pull requests to send to Boost libraries using Sphinx.&lt;/p&gt;

&lt;h3 id=&quot;doc-previews-and-doc-builds&quot;&gt;Doc Previews and Doc Builds&lt;/h3&gt;

&lt;p&gt;Antora docs usually show an “Edit this Page” link. Recently a couple of Alliance developers commented that the link didn’t quite work in some of the doc previews, which opened a topic to research solutions and make the Antora edit-this-page feature more robust if possible. The issue is that Boost libraries are git submodules. When working as expected, submodules are checked out as “HEAD detached at a74967f0” rather than “develop”. If Antora’s edit-this-page code sees “HEAD detached at a74967f0”, it will default to the path HEAD, which is wrong on the GitHub side. A solution we found (credit to Ruben Perez) is to set the Antora config to edit_url: ‘{web_url}/edit/develop/{path}’, and not leave a {ref} type of variable in the path.&lt;/p&gt;

&lt;p&gt;Rolling out the antora-downloads-extension to numerous boost and alliance repositories. It will retry the ui-bundle download.&lt;/p&gt;

&lt;p&gt;Refactored the release-tools build_docs scripts so that the gems and pip packages are organized into a format that matches Gemfile and requirements.txt files, instead of the script’s previous approach of running “gem install package” directly. By using a Gemfile, the script becomes compatible with other build systems, so content can be copy-pasted easily.&lt;/p&gt;

&lt;p&gt;CircleCI superproject builds use docbook-xml.zip, whose download URL broke. Switched the link address, and we are also hosting a copy of the file at https://dl.cpp.al/misc/docbook-xml.zip.&lt;/p&gt;

&lt;h3 id=&quot;boost-website-boostorgwebsite-v2&quot;&gt;Boost website boostorg/website-v2&lt;/h3&gt;

&lt;p&gt;Collaborated in the process of onboarding the consulting company Metalab, who are working on V3, the next iteration of the boost.org website.&lt;/p&gt;

&lt;p&gt;Disabled Fastly caching to assist Metalab developers.&lt;/p&gt;

&lt;p&gt;Gitflow workflow planning meetings.&lt;/p&gt;

&lt;p&gt;Discussions about how Tools should be presented on the libraries pages.&lt;/p&gt;

&lt;p&gt;On the DB servers, adjusted the PostgreSQL authentication configuration from md5 to scram-sha-256 on all databases and in multiple Ansible roles. This turns out to be a somewhat superficial change (though still worth doing), since newer Postgres versions use scram-sha-256 behind the scenes regardless.&lt;/p&gt;

&lt;p&gt;Wrote deploy-qa.sh, a script to enable metalab QA engineers to deploy a pull request onto a test server. The precise git SHA commit of any open pull request can be tested.&lt;/p&gt;

&lt;p&gt;Wrote upload-images.sh, a script to store Bob Ostrom’s boost cartoons in S3 instead of the github repo.&lt;/p&gt;

&lt;h3 id=&quot;mailman3&quot;&gt;Mailman3&lt;/h3&gt;

&lt;p&gt;Synced production lists to the staging server. Wrote a document in the cppalliance/boost-mailman repo explaining how the multi-step process of syncing can be done.&lt;/p&gt;

&lt;h3 id=&quot;boostorg&quot;&gt;boostorg&lt;/h3&gt;

&lt;p&gt;Migrated cppalliance/decimal to boostorg/decimal.&lt;/p&gt;

&lt;h3 id=&quot;jenkins&quot;&gt;Jenkins&lt;/h3&gt;

&lt;p&gt;The Jenkins server builds documentation previews for dozens of boostorg and cppalliance repositories, where each job is assigned its own “workspace” directory and then proceeds to install 1GB of node_modules. That was happening for every build and every pull request, and the disk space on the server was filling up - every few weeks, yet another 100GB. Rather than continue to resize the disk, or delete jobs too aggressively, was there an opportunity for optimization? Yes. In the superproject container image, the Node.js installation was relocated to /opt/nvm instead of root’s home directory, so it can now be shared by other jobs, which reduces space. The scripts also conditionally check whether mermaid is needed and/or already available in /opt/nvm. With these modifications, each job no longer needs to install a large number of npm packages, so the job size is drastically reduced.&lt;/p&gt;

&lt;p&gt;Upgraded the server and all plugins; this was necessary to fix spurious bugs in certain Jenkins jobs.&lt;/p&gt;

&lt;p&gt;Debugging Jenkins runners, set subnet and zone on the cloud server configurations.&lt;/p&gt;

&lt;p&gt;Fixed lcov jobs, which need cxxstd=20.&lt;/p&gt;

&lt;p&gt;Migrated many administrative scripts from a local directory on the server to the jenkins-ci repository, revising, cleaning, and discarding certain scripts along the way.&lt;/p&gt;

&lt;p&gt;Dmitry contributed diff-reports that should now appear in every pull request that has been configured for LCOV previews.&lt;/p&gt;

&lt;p&gt;Implemented new flags in the lcov build scripts: [--skip-gcovr] [--skip-genhtml] [--skip-diff-report] [--only-gcovr].&lt;/p&gt;

&lt;p&gt;Added an Ansible role task to install the check_jenkins_queue Nagios plugin automatically.&lt;/p&gt;

&lt;h3 id=&quot;gha&quot;&gt;GHA&lt;/h3&gt;

&lt;p&gt;Completed a major upgrade of the Terraform installation which had lagged upstream code by nearly two years.&lt;/p&gt;

&lt;p&gt;Deployed a series of GitHub Actions runners for Joaquin’s latest benchmarks at https://github.com/boostorg/boost_hub_benchmarks. Installed the latest VS2026 and upgraded macOS to 26.3.&lt;/p&gt;

&lt;h3 id=&quot;drone&quot;&gt;Drone&lt;/h3&gt;

&lt;p&gt;Launched new MacOS 26 drone runners, and FreeBSD 15.0 drone runners.&lt;/p&gt;</content><author><name></name></author><category term="sam" /><summary type="html">Code Coverage Reports - designing new GCOVR templates A major effort this quarter and continuing on since it was mentioned in the last newsletter is the development of codecov-like coverage reports that run in GitHub Actions and are hosted on GitHub Pages. Instructions: Code Coverage with Github Actions and Github Pages. The process has really highlighted a phenomenon in open-source software where by publishing something to the whole community, end-users respond back with their own suggestions and fixes, and everything improves iteratively. It would not have happened otherwise. The upstream GCOVR project has taken an interest in the templates and we are working on merging them into the main repository for all gcovr users. Boost contributors and gcovr maintainers have suggested numerous modifications for the templates. Great work by Julio Estrada on the template development. Better full page scrolling of C++ source code files Include ‘functions’ listings on every page Optionally disable branch coverage Purposely restrict coverage directories to src/ and include/ Another scrolling bug fixed Both blue and green colored themes Codacy linting New forward and back buttons. Allows navigation to each “miss” and subsequent pages Server Hosting This quarter we decommissioned the Rackspace servers which had been in service 10-15 years. Rene provided a nice announcement: Farewell to Wowbagger - End of an Era for boost.org There was more to do then just delete servers, I built a new results.boost.org FTP server replacing the preexisting FTP server used by regression.boost.org. Configured and tested it. Inventoried the old machines, including a monitoring server. Built a replacement wowbagger called wowbagger2 to host a copy of the website - original.boost.org. 
The monthly cost of a small GCP Compute instance seems to be around 5% of the Rackspace legacy cloud server. Components: Ubuntu 24.04. Apache. PHP 5 PPA. “original.boost.org” continues to host a copy of the earlier boost.org website for comparison and development purposes which is interesting to check. Launched server instances for corosio.org and paperflow. Fil-C Working with Tom Kent to add Fil-C https://github.com/pizlonator/fil-c test into the regression matrix https://regression.boost.org/ . Built a Fil-C container image based on Drone images. Debugging the build process. After a few roadblocks, the latest news is that Fil-C seems to be successfully building. This is not quite finished but should be online soon. Boost release process boostorg/release-tools The boostorg/boost CircleCI jobs often threaten to cross the 1-hour time limit. Increased parallel processes from 4 to 8. Increased instance size from medium to large. And yet another adjustment: there are 4 compression algorithms used by the releases (gz, bz2, 7z, zip) and it is possible to find drop-in replacement programs that go much faster than the standard ones by utilizing parallelization. lbzip2 pigz. The substitute binaries were applied to publish-releases.py recently. Now the same idea in ci_boost_release.py. All of this reduced the CircleCI job time by many minutes. Certain boost library pull requests were finally merged after a long delay allowing an upgrade of the Sphinx pip package. Tested a superproject container image for the CircleCI jobs with updated pip packages. Boost is currently in a code freeze so this will not go live until after 1.91.0. Sphinx docs continue to deal with upgrade incompatibilities. I prepared another set of pull requests to send to boost libraries using Sphinx. Doc Previews and Doc Builds Antora docs usually show an “Edit this Page” link. 
Recently a couple of Alliance developers happened to comment the link didn’t quite work in some of the doc previews, and so that opened a topic to research solutions and make the Antora edit-this-page feature more robust if possible. The issue is that Boost libraries are git submodules. When working as expected submodules are checked out as “HEAD detached at a74967f0” rather than “develop”. If Antora’s edit-this-page code sees “HEAD detached at a74967f0” it will default to the path HEAD. That’s wrong on the GitHub side. A solution we found (credit to Ruben Perez) is to set the antora config to edit_url: ‘{web_url}/edit/develop/{path}’. Don’t leave a {ref} type of variable in the path. Rolling out the antora-downloads-extension to numerous boost and alliance repositories. It will retry the ui-bundle download. Refactored the release-tools build_docs scripts so that the gems and pip packages are organized into a format that matches Gemfile and requirement.txt files, instead of what the script was doing before “gem install package”. By using a Gemfile, the script becomes compatible with other build systems so content can be copy-pasted easily. CircleCI superproject builds use docbook-xml.zip, where the download url broke. Switched the link address. Also hosting a copy of the file at https://dl.cpp.al/misc/docbook-xml.zip Boost website boostorg/website-v2 Collaborated in the process of on-boarding the consulting company Metalab who are working on V3, the next iteration of the boost.org website. Disable Fastly caching to assist metalab developers. Gitflow workflow planning meetings. Discussions about how Tools should be present on the libraries pages. On the DB servers, adjusted postgresql authentication configurations from md5 to scram-sha-256 on all databases and multiple ansible roles. Actually this turns out to be a superficial change even though it should be done. The reason is that newer postgres will use scram-sha-256 behind-the-scenes regardless. 
Wrote deploy-qa.sh, a script to enable metalab QA engineers to deploy a pull request onto a test server. The precise git SHA commit of any open pull request can be tested. Wrote upload-images.sh, a script to store Bob Ostrom’s boost cartoons in S3 instead of the github repo. Mailman3 Synced production lists to the staging server. Wrote a document in the cppalliance/boost-mailman repo explaining how the multi-step process of syncing can be done. boostorg Migrated cppalliance/decimal to boostorg/decimal. Jenkins The Jenkins server is building documentation previews for dozens of boostorg and cppalliance repositories where each job is assigned its own “workspace” directory and then proceeds to install 1GB of node_modules. That was happening for every build and every pull request. The disk space on the server was filling up, every few weeks yet another 100GB. Rather than continue to resize the disk, or delete all jobs too quickly, was there the opportunity for optimization? Yes. In the superproject container image relocate the nodejs installation to /opt/nvm instead of root’s home directory. The /opt/nvm installation can now be “shared” by other jobs which reduces space. Conditionally check if mermaid is needed and/or if mermaid is already available in /opt/nvm. With these modifications, since each job doesn’t need to install a large amount of npm packages the job size is drastically reduced. Upgraded server and all plugins. Necessary to fix spurious bugs in certain Jenkins jobs. Debugging Jenkins runners, set subnet and zone on the cloud server configurations. Fixed lcov jobs, that need cxxstd=20 Migrated many administrative scripts from a local directory on the server to the jenkins-ci repository. Revise, clean, discard certain scripts. Dmitry contributed diff-reports that should now appear in every pull request which has been configured for LCOV previews. 
Implemented –flags in lcov build scripts [–skip-gcovr] [–skip-genhtml] [–skip-diff-report] [–only-gcovr] Ansible role task: install check_jenkins_queue nagios plugin automatically from Ansible. GHA Completed a major upgrade of the Terraform installation which had lagged upstream code by nearly two years. Deployed a series of GitHub Actions runners for Joaquin’s latest benchmarks at https://github.com/boostorg/boost_hub_benchmarks. Installed latest VS2026. MacOS upgrade to 26.3. Drone Launched new MacOS 26 drone runners, and FreeBSD 15.0 drone runners.</summary></entry><entry><title type="html">Statement from the C++ Alliance on WG21 Committee Meeting Support</title><link href="http://cppalliance.org/company/2026/03/27/WG21-Meeting-Support-Statement.html" rel="alternate" type="text/html" title="Statement from the C++ Alliance on WG21 Committee Meeting Support" /><published>2026-03-27T00:00:00+00:00</published><updated>2026-03-27T00:00:00+00:00</updated><id>http://cppalliance.org/company/2026/03/27/WG21-Meeting-Support-Statement</id><content type="html" xml:base="http://cppalliance.org/company/2026/03/27/WG21-Meeting-Support-Statement.html">&lt;p&gt;The C++ Alliance is proud to support attendance at WG21 committee meetings. We believe that facilitating the attendance of domain experts produces better outcomes for C++ and for the broader ecosystem, and we are committed to making participation more accessible.&lt;/p&gt;

&lt;p&gt;We want to be unequivocally clear: the C++ Alliance does not, and will never, direct or compel attendees to vote in any particular way. Our support comes with no strings attached. Those who attend are free and encouraged to exercise their independent judgment on every proposal before the committee.&lt;/p&gt;

&lt;p&gt;The integrity of the WG21 standards process depends on the independence of its participants. We respect that process deeply, and any suggestion to the contrary does not reflect our values or our program.&lt;/p&gt;

&lt;p&gt;If you are interested in learning more about our attendance program, please reach out to us at &lt;a href=&quot;mailto:info@cppalliance.org&quot;&gt;info@cppalliance.org&lt;/a&gt;.&lt;/p&gt;</content><author><name></name></author><category term="company" /><summary type="html">The C++ Alliance is proud to support attendance at WG21 committee meetings. We believe that facilitating the attendance of domain experts produces better outcomes for C++ and for the broader ecosystem, and we are committed to making participation more accessible. We want to be unequivocally clear: the C++ Alliance does not, and will never, direct or compel attendees to vote in any particular way. Our support comes with no strings attached. Those who attend are free and encouraged to exercise their independent judgment on every proposal before the committee. The integrity of the WG21 standards process depends on the independence of its participants. We respect that process deeply, and any suggestion to the contrary does not reflect our values or our program. If you are interested in learning more about our attendance program, please reach out to us at info@cppalliance.org.</summary></entry><entry><title type="html">Corosio Beta: Coroutine-Native Networking for C++20</title><link href="http://cppalliance.org/mark/2026/03/11/Corosio-Beta-Coroutine-Native-Networking.html" rel="alternate" type="text/html" title="Corosio Beta: Coroutine-Native Networking for C++20" /><published>2026-03-11T00:00:00+00:00</published><updated>2026-03-11T00:00:00+00:00</updated><id>http://cppalliance.org/mark/2026/03/11/Corosio-Beta-Coroutine-Native-Networking</id><content type="html" xml:base="http://cppalliance.org/mark/2026/03/11/Corosio-Beta-Coroutine-Native-Networking.html">&lt;h1 id=&quot;corosio-beta-coroutine-native-networking-for-c20&quot;&gt;Corosio Beta: Coroutine-Native Networking for C++20&lt;/h1&gt;

&lt;p&gt;&lt;em&gt;The C++ Alliance is releasing the Corosio beta, a networking library designed from the ground up for C++20 coroutines. We are inviting serious C++ developers to use it, break it, and tell us what needs to change before it goes to Boost formal review.&lt;/em&gt;&lt;/p&gt;

&lt;hr /&gt;

&lt;h2 id=&quot;the-gap-c20-left-open&quot;&gt;The Gap C++20 Left Open&lt;/h2&gt;

&lt;p&gt;C++20 gave us coroutines. It did not give us networking to go with them. Boost.Asio has added coroutine support over the years, but its foundations were laid for a world of callbacks and completion handlers. Retrofitting coroutines onto that model produces code that works, but never quite feels like the language you are writing in. We decided to find out what networking looks like when you start over.&lt;/p&gt;

&lt;hr /&gt;

&lt;h2 id=&quot;what-corosio-is&quot;&gt;What Corosio Is&lt;/h2&gt;

&lt;p&gt;Corosio is a coroutine-only networking library for C++20. It provides TCP sockets, acceptors, TLS streams, timers, and DNS resolution. Every operation is an awaitable. You write &lt;code&gt;co_await&lt;/code&gt; and the library handles executor affinity, cancellation, and frame allocation. No callbacks. No futures. No sender/receiver.&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-cpp&quot;&gt;auto [socket] = co_await acceptor.async_accept();
auto n = co_await socket.async_read_some(buffer);
co_await socket.async_write(response);
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Corosio runs on Windows (IOCP), Linux (epoll), and macOS (kqueue). It targets GCC 12+, Clang 17+, and MSVC 14.34+, with no dependencies outside the standard library. Capy, its I/O foundation, is fetched automatically by CMake.&lt;/p&gt;

&lt;hr /&gt;

&lt;h2 id=&quot;built-on-capy&quot;&gt;Built on Capy&lt;/h2&gt;

&lt;p&gt;Corosio is built on Capy, a coroutine I/O foundation library that ships alongside it. The core insight driving Capy’s design comes from Peter Dimov: &lt;em&gt;an API designed from the ground up to use C++20 coroutines can achieve performance and ergonomics which cannot otherwise be obtained.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Capy’s &lt;em&gt;IoAwaitable&lt;/em&gt; protocol ensures coroutines resume on the correct executor after I/O completes, without thread-local globals, implicit context, or manual dispatch. Cancellation follows the same forward-propagation model: stop tokens flow from the top of a coroutine chain to the platform API boundary, giving you uniform cancellation across all operations. Frame allocation uses thread-local recycling pools to achieve zero steady-state heap allocations after warmup.&lt;/p&gt;
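&lt;p&gt;As a generic sketch of the recycling idea (not Capy’s actual implementation), a thread-local free list can serve fixed-size coroutine frames, so that after warmup the steady state performs no heap allocations:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-cpp&quot;&gt;#include &amp;lt;cstddef&amp;gt;
#include &amp;lt;vector&amp;gt;

// Hypothetical frame pool: recycled slots bypass the heap entirely.
struct frame_pool {
    static constexpr std::size_t slot = 1024; // illustrative slot size
    std::vector&amp;lt;void*&amp;gt; free_;
    void* allocate(std::size_t n) {
        if (n &amp;lt;= slot &amp;amp;&amp;amp; !free_.empty()) {
            void* p = free_.back();
            free_.pop_back();
            return p;                          // recycled, no heap allocation
        }
        return ::operator new(n &amp;lt;= slot ? slot : n);
    }
    void deallocate(void* p, std::size_t n) {
        if (n &amp;lt;= slot) free_.push_back(p);  // return the slot to the pool
        else ::operator delete(p);
    }
};
inline thread_local frame_pool tls_pool;
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;A promise type’s &lt;code&gt;operator new&lt;/code&gt;/&lt;code&gt;operator delete&lt;/code&gt; would route through such a pool, so frames of typical sizes are recycled rather than allocated once the pool is warm.&lt;/p&gt;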

&lt;hr /&gt;

&lt;h2 id=&quot;what-we-are-asking-for&quot;&gt;What We Are Asking For&lt;/h2&gt;

&lt;p&gt;We are looking for feedback on correctness, ergonomics, platform behavior, documentation, and performance under real workloads. Specifically:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Does the executor affinity model hold up under production conditions?&lt;/li&gt;
  &lt;li&gt;Does cancellation behave correctly across complex coroutine chains?&lt;/li&gt;
  &lt;li&gt;Are there platform-specific edge cases in the IOCP, epoll, or kqueue backends?&lt;/li&gt;
  &lt;li&gt;Does the zero-allocation model hold in your deployment scenarios?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We are inviting serious C++ developers, especially if you have shipped networking code, to use it, break it, and tell us what your experience was. The Boost review process rewards libraries that arrive having already faced serious scrutiny.&lt;/p&gt;

&lt;hr /&gt;

&lt;h2 id=&quot;get-it&quot;&gt;Get It&lt;/h2&gt;

&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;git clone https://github.com/cppalliance/corosio.git
cd corosio
cmake -S . -B build -G Ninja
cmake --build build
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Or with CMake FetchContent:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;include(FetchContent)
FetchContent_Declare(corosio
  GIT_REPOSITORY https://github.com/cppalliance/corosio.git
  GIT_TAG        develop
  GIT_SHALLOW    TRUE)
FetchContent_MakeAvailable(corosio)
target_link_libraries(my_app Boost::corosio)
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;strong&gt;Requires:&lt;/strong&gt; CMake 3.25+, GCC 12+ / Clang 17+ / MSVC 14.34+&lt;/p&gt;

&lt;hr /&gt;

&lt;h2 id=&quot;resources&quot;&gt;Resources&lt;/h2&gt;

&lt;p&gt;&lt;a href=&quot;https://github.com/cppalliance/corosio&quot;&gt;Corosio on GitHub&lt;/a&gt; – https://github.com/cppalliance/corosio&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://master.corosio.cpp.al/&quot;&gt;Corosio Docs&lt;/a&gt; – https://master.corosio.cpp.al/&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://github.com/cppalliance/capy&quot;&gt;Capy on GitHub&lt;/a&gt; – https://github.com/cppalliance/capy&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://master.capy.cpp.al/&quot;&gt;Capy Docs&lt;/a&gt; – https://master.capy.cpp.al/&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://github.com/cppalliance/corosio/issues&quot;&gt;File an Issue&lt;/a&gt; – https://github.com/cppalliance/corosio/issues&lt;/p&gt;</content><author><name></name></author><category term="mark" /><summary type="html">Corosio Beta: Coroutine-Native Networking for C++20 The C++ Alliance is releasing the Corosio beta, a networking library designed from the ground up for C++20 coroutines. We are inviting serious C++ developers to use it, break it, and tell us what needs to change before it goes to Boost formal review. The Gap C++20 Left Open C++20 gave us coroutines. It did not give us networking to go with them. Boost.Asio has added coroutine support over the years, but its foundations were laid for a world of callbacks and completion handlers. Retrofitting coroutines onto that model produces code that works, but never quite feels like the language you are writing in. We decided to find out what networking looks like when you start over. What Corosio Is Corosio is a coroutine-only networking library for C++20. It provides TCP sockets, acceptors, TLS streams, timers, and DNS resolution. Every operation is an awaitable. You write co_await and the library handles executor affinity, cancellation, and frame allocation. No callbacks. No futures. No sender/receiver. auto [socket] = co_await acceptor.async_accept(); auto n = co_await socket.async_read_some(buffer); co_await socket.async_write(response); Corosio runs on Windows (IOCP), Linux (epoll), and macOS (kqueue). It targets GCC 12+, Clang 17+, and MSVC 14.34+, with no dependencies outside the standard library. Capy, its I/O foundation, is fetched automatically by CMake. Built on Capy Corosio is built on Capy, a coroutine I/O foundation library that ships alongside it. The core insight driving Capy’s design comes from Peter Dimov: an API designed from the ground up to use C++20 coroutines can achieve performance and ergonomics which cannot otherwise be obtained. 
Capy’s IoAwaitable protocol ensures coroutines resume on the correct executor after I/O completes, without thread-local globals, implicit context, or manual dispatch. Cancellation follows the same forward-propagation model: stop tokens flow from the top of a coroutine chain to the platform API boundary, giving you uniform cancellation across all operations. Frame allocation uses thread-local recycling pools to achieve zero steady-state heap allocations after warmup. What We Are Asking For We are looking for feedback on correctness, ergonomics, platform behavior, documentation, and performance under real workloads. Specifically: Does the executor affinity model hold up under production conditions? Does cancellation behave correctly across complex coroutine chains? Are there platform-specific edge cases in the IOCP, epoll, or kqueue backends? Does the zero-allocation model hold in your deployment scenarios? We are inviting serious C++ developers, especially if you have shipped networking code, to use it, break it, and tell us what your experience was. The Boost review process rewards libraries that arrive having already faced serious scrutiny. Get It git clone https://github.com/cppalliance/corosio.git cd corosio cmake -S . 
-B build -G Ninja cmake --build build Or with CMake FetchContent: include(FetchContent) FetchContent_Declare(corosio GIT_REPOSITORY https://github.com/cppalliance/corosio.git GIT_TAG develop GIT_SHALLOW TRUE) FetchContent_MakeAvailable(corosio) target_link_libraries(my_app Boost::corosio) Requires: CMake 3.25+, GCC 12+ / Clang 17+ / MSVC 14.34+ Resources Corosio on GitHub – https://github.com/cppalliance/corosio Corosio Docs – https://develop.corosio.cpp.al/ Capy on GitHub – https://github.com/cppalliance/capy Capy Docs – https://develop.capy.cpp.al/ File an Issue – https://github.com/cppalliance/corosio/issues</summary></entry><entry><title type="html">A postgres library for Boost</title><link href="http://cppalliance.org/ruben/2026/01/23/Ruben2025Q4Update.html" rel="alternate" type="text/html" title="A postgres library for Boost" /><published>2026-01-23T00:00:00+00:00</published><updated>2026-01-23T00:00:00+00:00</updated><id>http://cppalliance.org/ruben/2026/01/23/Ruben2025Q4Update</id><content type="html" xml:base="http://cppalliance.org/ruben/2026/01/23/Ruben2025Q4Update.html">&lt;p&gt;Do you know Boost.MySQL? If you’ve been reading my posts, you probably do.
Many people have wondered ‘why not Postgres?’ Well, the time is now.
TL;DR: I’m writing the equivalent of Boost.MySQL, but for PostgreSQL.
You can find the code &lt;a href=&quot;https://github.com/anarthal/nativepg&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Since libPQ is already a good library, the NativePG project intends
to be more ambitious than Boost.MySQL. In addition to the expected
Asio interface, I intend to provide a sans-io API that exposes primitives
like message serialization.&lt;/p&gt;

&lt;p&gt;Throughout this post, I will go into the intended library design and the rationale
behind it.&lt;/p&gt;

&lt;h2 id=&quot;the-lowest-level-message-serialization&quot;&gt;The lowest level: message serialization&lt;/h2&gt;

&lt;p&gt;PostgreSQL clients communicate with the server using
a binary protocol on top of TCP, termed &lt;a href=&quot;https://www.postgresql.org/docs/current/protocol.html&quot;&gt;the frontend/backend protocol&lt;/a&gt;.
The protocol defines a set of messages used for interactions. For example, when running a query, the following happens:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;┌────────┐                                    ┌────────┐
│ Client │                                    │ Server │
└───┬────┘                                    └───┬────┘
    │                                             │
    │  Query                                      │
    │ ──────────────────────────────────────────&amp;gt; │
    │                                             │
    │                        RowDescription       │
    │ &amp;lt;────────────────────────────────────────── │
    │                                             │
    │                              DataRow        │
    │ &amp;lt;────────────────────────────────────────── │
    │                                             │
    │                        CommandComplete      │
    │ &amp;lt;────────────────────────────────────────── │
    │                                             │
    │                        ReadyForQuery        │
    │ &amp;lt;────────────────────────────────────────── │
    │                                             │
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;In the lowest layer, this library provides functions to serialize and parse
such messages. The goal here is to be as efficient as possible:
parsing functions are non-allocating, and use an approach inspired by
Boost.Url collections.&lt;/p&gt;
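&lt;p&gt;For example (a hypothetical sketch in that spirit, not NativePG’s actual API), the fields of a &lt;code&gt;DataRow&lt;/code&gt; body can be walked lazily as non-owning views, with each field’s length prefix decoded on demand and nothing copied or allocated:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-cpp&quot;&gt;#include &amp;lt;cstddef&amp;gt;
#include &amp;lt;cstdint&amp;gt;
#include &amp;lt;optional&amp;gt;
#include &amp;lt;span&amp;gt;

// Each DataRow field on the wire is an int32 big-endian length
// (-1 means SQL NULL) followed by that many bytes. The leading
// int16 column count is assumed to have been consumed already.
std::int32_t read_be32(const unsigned char* p) {
    return (std::int32_t(p[0]) &amp;lt;&amp;lt; 24) | (p[1] &amp;lt;&amp;lt; 16) | (p[2] &amp;lt;&amp;lt; 8) | p[3];
}

// Returns the next field as a view into the buffer (nullopt = SQL NULL)
// and advances `pos` past it. No copies, no allocations.
std::optional&amp;lt;std::span&amp;lt;const unsigned char&amp;gt;&amp;gt;
next_field(std::span&amp;lt;const unsigned char&amp;gt; body, std::size_t&amp;amp; pos) {
    std::int32_t len = read_be32(body.data() + pos);
    pos += 4;
    if (len &amp;lt; 0) return std::nullopt;
    auto field = body.subspan(pos, static_cast&amp;lt;std::size_t&amp;gt;(len));
    pos += static_cast&amp;lt;std::size_t&amp;gt;(len);
    return field;
}
&lt;/code&gt;&lt;/pre&gt;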

&lt;h2 id=&quot;parsing-database-types&quot;&gt;Parsing database types&lt;/h2&gt;

&lt;p&gt;The PostgreSQL type system is quite rich. In addition to the usual SQL built-in types,
it supports advanced scalars like UUIDs, arrays and user-defined aggregates.&lt;/p&gt;

&lt;p&gt;When running a query, libPQ exposes retrieved data as either raw text or bytes.
This is what the server sends in the &lt;code&gt;DataRow&lt;/code&gt; packets shown above.
To do something useful with the data, users will likely need to parse and serialize
such types.&lt;/p&gt;

&lt;p&gt;The next layer of NativePG is in charge of providing such functions.
This will likely contain some extension points for users to plug in their types.
This is the general form of such functions:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-cpp&quot;&gt;system::error_code parse(span&amp;lt;const std::byte&amp;gt; from, T&amp;amp; to, const connection_state&amp;amp;);
void serialize(const T&amp;amp; from, dynamic_buffer&amp;amp; to, const connection_state&amp;amp;);
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Note that some types might require access to session configuration.
For instance, dates may be expressed using different wire formats depending
on the connection’s runtime settings.&lt;/p&gt;
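&lt;p&gt;As a concrete illustration (my sketch, not NativePG code): with &lt;code&gt;integer_datetimes=on&lt;/code&gt;, the server sends binary &lt;code&gt;timestamp&lt;/code&gt; values as an int64 count of microseconds since 2000-01-01, while servers built with float datetimes historically sent a double instead, which is exactly why parsing needs the session state:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-cpp&quot;&gt;#include &amp;lt;chrono&amp;gt;
#include &amp;lt;cstdint&amp;gt;

// Decode a binary Postgres timestamp under integer_datetimes=on:
// an int64 number of microseconds since 2000-01-01 00:00:00.
std::chrono::sys_time&amp;lt;std::chrono::microseconds&amp;gt;
parse_pg_timestamp(std::int64_t pg_micros) {
    using namespace std::chrono;
    const sys_days pg_epoch = year{2000}/1/1; // the Postgres epoch
    return sys_time&amp;lt;microseconds&amp;gt;{pg_epoch} + microseconds{pg_micros};
}
&lt;/code&gt;&lt;/pre&gt;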

&lt;p&gt;At the time of writing, only ints and strings are supported,
but this will be extended soon.&lt;/p&gt;

&lt;h2 id=&quot;composing-requests&quot;&gt;Composing requests&lt;/h2&gt;

&lt;p&gt;Efficiency in database communication is achieved with pipelining.
A network round-trip with the server is worth a thousand allocations in the client.
It is thus critical that:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;The protocol properly supports pipelining. This is the case with PostgreSQL.&lt;/li&gt;
  &lt;li&gt;The client should expose an interface to it, and make it very easy to use.
libPQ does the first, and NativePG intends to achieve the second.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;NativePG pipelines by default. In NativePG, a &lt;code&gt;request&lt;/code&gt; object is always
a pipeline:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-cpp&quot;&gt;// Create a request
request req;

// These two queries will be executed as part of a pipeline
req.add_query(&quot;SELECT * FROM libs WHERE author = $1&quot;, {&quot;Ruben&quot;});
req.add_query(&quot;DELETE FROM libs WHERE author &amp;lt;&amp;gt; $1&quot;, {&quot;Ruben&quot;});
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Everything you may ask the server can be added to &lt;code&gt;request&lt;/code&gt;.
This includes preparing and executing statements, establishing
pipeline synchronization points, and so on.
It aims to be close enough to the protocol to be powerful,
while also exposing high-level functions to make things easier.&lt;/p&gt;

&lt;h2 id=&quot;reading-responses&quot;&gt;Reading responses&lt;/h2&gt;

&lt;p&gt;Like &lt;code&gt;request&lt;/code&gt;, the core response mechanism aims to be as close
to the protocol as possible. Since use cases here are much more varied,
there is no single &lt;code&gt;response&lt;/code&gt; class, but a concept, instead.
This is what a &lt;code&gt;response_handler&lt;/code&gt; looks like:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-cpp&quot;&gt;struct my_handler {
    // Check that the handler is compatible with the request,
    // and prepare any required data structures. Called once at the beginning
    handler_setup_result setup(const request&amp;amp; req, std::size_t pipeline_offset);

    // Called once for every message received from the server
    // (e.g. `RowDescription`, `DataRow`, `CommandComplete`)
    void on_message(const any_request_message&amp;amp; msg);

    // The overall result of the operation (error_code + diagnostic string).
    // Called after the operation has finished.
    const extended_error&amp;amp; result() const;
};
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Note that &lt;code&gt;on_message&lt;/code&gt; is not allowed to report errors.
Even if a handler encounters a problem with a message
(imagine finding a &lt;code&gt;NULL&lt;/code&gt; for a field where the user isn’t expecting one),
this is a user error, rather than a protocol error.
Subsequent steps in the pipeline must not be affected by this.&lt;/p&gt;

&lt;p&gt;This is powerful but very low-level. Using this mechanism, the library
exposes an interface to parse the result of a query into a user-supplied
struct, using Boost.Describe:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-cpp&quot;&gt;struct library
{
    std::int32_t id;
    std::string name;
    std::string cpp_version;
};
BOOST_DESCRIBE_STRUCT(library, (), (id, name, cpp_version))

// ...
std::vector&amp;lt;library&amp;gt; libs;
auto handler = nativepg::into(libs); // this is a valid response_handler
&lt;/code&gt;&lt;/pre&gt;

&lt;h2 id=&quot;network-algorithms&quot;&gt;Network algorithms&lt;/h2&gt;

&lt;p&gt;Given a user request and response handler, how do we send these to the server?
We need a set of network algorithms to achieve this. Some of these are trivial:
sending a request to the server is an &lt;code&gt;asio::write&lt;/code&gt; on the request’s buffer.
Others, however, are more involved:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Reading a pipeline response needs to verify, for security, that the message
sequence is what we expect, and to handle errors gracefully.&lt;/li&gt;
  &lt;li&gt;The handshake algorithm, in charge of authentication when we connect to the
server, needs to respond to server authentication challenges, which may
come in different forms.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Writing these using &lt;code&gt;asio::async_compose&lt;/code&gt; is problematic because:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;They become tied to Boost.Asio.&lt;/li&gt;
  &lt;li&gt;They are difficult to test.&lt;/li&gt;
  &lt;li&gt;They result in long compile times and code bloat due to templating.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;At the moment, these are written as finite state machines, similar to
how OpenSSL behaves in non-blocking mode:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-cpp&quot;&gt;// Reads the response of a pipeline (simplified).
// This is a hand-wired generator.
class read_response_fsm {
public:
    // User-supplied arguments: request and response
    read_response_fsm(const request&amp;amp; req, response_handler_ref handler);

    // Yielded to signal that we should read from the server
    struct read_args { span&amp;lt;std::byte&amp;gt; buffer; };

    // Yielded to signal that we're done
    struct done_args { system::error_code result; };

    variant&amp;lt;read_args, done_args&amp;gt;
    resume(connection_state&amp;amp;, system::error_code io_result, std::size_t bytes_transferred);
};
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The idea is that higher-level code should call &lt;code&gt;resume&lt;/code&gt; until it returns
a &lt;code&gt;done_args&lt;/code&gt; value. This decouples the algorithm from the underlying I/O runtime.&lt;/p&gt;
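&lt;p&gt;To make the pattern concrete, here is a self-contained sketch of that driver loop (simplified: &lt;code&gt;std::error_code&lt;/code&gt; stands in for &lt;code&gt;system::error_code&lt;/code&gt;, the &lt;code&gt;connection_state&lt;/code&gt; parameter is dropped, and the FSM is a toy that just counts bytes):&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-cpp&quot;&gt;#include &amp;lt;cstddef&amp;gt;
#include &amp;lt;span&amp;gt;
#include &amp;lt;system_error&amp;gt;
#include &amp;lt;variant&amp;gt;

// Stub shapes mirroring the FSM interface above (illustrative only)
struct read_args { std::span&amp;lt;std::byte&amp;gt; buffer; };
struct done_args { std::error_code result; };

// Toy FSM: keeps asking for reads until `need` bytes have arrived
struct toy_fsm {
    std::size_t need;
    std::size_t got = 0;
    std::byte buf[16];
    std::variant&amp;lt;read_args, done_args&amp;gt;
    resume(std::error_code io_result, std::size_t bytes_transferred) {
        if (io_result) return done_args{io_result};
        got += bytes_transferred;
        if (got &amp;gt;= need) return done_args{{}};
        return read_args{std::span&amp;lt;std::byte&amp;gt;(buf)};
    }
};

// The driver pattern: call resume() until it yields done_args
template &amp;lt;class Fsm, class ReadFn&amp;gt;
std::error_code run(Fsm&amp;amp; fsm, ReadFn read_some) {
    std::error_code ec;
    std::size_t n = 0;
    for (;;) {
        auto action = fsm.resume(ec, n);
        if (auto* done = std::get_if&amp;lt;done_args&amp;gt;(&amp;amp;action))
            return done-&amp;gt;result;         // FSM finished
        auto&amp;amp; rd = std::get&amp;lt;read_args&amp;gt;(action);
        n = read_some(rd.buffer, ec);    // perform the requested I/O
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;An Asio-based implementation would drive the same loop with &lt;code&gt;async_read_some&lt;/code&gt;, while tests can drive it with canned buffers and no sockets at all.&lt;/p&gt;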

&lt;p&gt;Since NativePG targets C++20, I’m considering rewriting this as a coroutine.
Boost.Capy (currently under development - hopefully part of Boost soon)
could be a good candidate for this.&lt;/p&gt;

&lt;h2 id=&quot;putting-everything-together-the-asio-interface&quot;&gt;Putting everything together: the Asio interface&lt;/h2&gt;

&lt;p&gt;At the end of the day, most users just want a &lt;code&gt;connection&lt;/code&gt; object they can easily
use. Once all the sans-io parts are working, writing it is pretty straightforward.
This is what end-user code looks like:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-cpp&quot;&gt;// Create a connection
connection conn{co_await asio::this_coro::executor};

// Connect
co_await conn.async_connect(
    {.hostname = &quot;localhost&quot;, .username = &quot;postgres&quot;, .password = &quot;&quot;, .database = &quot;postgres&quot;}
);
std::cout &amp;lt;&amp;lt; &quot;Startup complete\n&quot;;

// Compose our request and response
request req;
req.add_query(&quot;SELECT * FROM libs WHERE author = $1&quot;, {&quot;Ruben&quot;});
std::vector&amp;lt;library&amp;gt; libs;

// Run the request
co_await conn.async_exec(req, into(libs));
&lt;/code&gt;&lt;/pre&gt;

&lt;h2 id=&quot;auto-batch-connections&quot;&gt;Auto-batch connections&lt;/h2&gt;

&lt;p&gt;While &lt;code&gt;connection&lt;/code&gt; is good, experience has shown me that it’s still
too low-level for most users:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Connection establishment is manual with &lt;code&gt;async_connect&lt;/code&gt;.&lt;/li&gt;
  &lt;li&gt;No built-in reconnection or health checks.&lt;/li&gt;
  &lt;li&gt;No built-in concurrent execution of requests.
That is, &lt;code&gt;async_exec&lt;/code&gt; first writes the request, then reads the response.
No other requests can be executed during this period,
which limits the connection’s throughput.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For this reason, NativePG will provide some higher-level interfaces
that will make server communication easier and more efficient.
To get a feel for what we need, we should first understand
the two main usage patterns that we expect.&lt;/p&gt;

&lt;p&gt;Most of the time, connections are used in a &lt;strong&gt;stateless&lt;/strong&gt; way.
For example, consider querying data from the server:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-cpp&quot;&gt;request req;
req.add_query(&quot;SELECT * FROM libs WHERE author = $1&quot;, {&quot;Ruben&quot;});
co_await conn.async_exec(req, res);
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;This query does not mutate connection state in any way.
Other queries could be inserted before or after it without
making any difference.&lt;/p&gt;

&lt;p&gt;I plan to add a higher-level connection type, similar to
&lt;code&gt;redis::connection&lt;/code&gt; in Boost.Redis, that automatically
batches concurrent requests and handles reconnection.
The key differences from &lt;code&gt;connection&lt;/code&gt; would be:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Several independent tasks can share an auto-batch connection.
This is an error for &lt;code&gt;connection&lt;/code&gt;.&lt;/li&gt;
  &lt;li&gt;If several requests are queued at the same time,
the connection may send them together to the server using a single system call.&lt;/li&gt;
  &lt;li&gt;There is no &lt;code&gt;async_connect&lt;/code&gt; in an auto-batch connection.
Reconnection is handled automatically.&lt;/li&gt;
&lt;/ul&gt;
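&lt;p&gt;The batching idea can be sketched with a toy model (hypothetical, not the actual implementation): requests posted by independent tasks while the connection is busy are queued, then flushed together with a single write:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-cpp&quot;&gt;#include &amp;lt;string&amp;gt;
#include &amp;lt;vector&amp;gt;

// Hypothetical sketch of auto-batching (not NativePG code)
class toy_batcher {
    std::vector&amp;lt;std::string&amp;gt; queue_;
public:
    int writes = 0;    // simulated system calls
    std::string wire;  // everything &quot;sent&quot; so far

    void post(std::string serialized_request) {
        queue_.push_back(std::move(serialized_request));
    }

    // Send everything queued with one simulated system call. A real
    // implementation would use a gather-write over the queued buffers.
    void flush() {
        if (queue_.empty()) return;
        for (const auto&amp;amp; r : queue_) wire += r;
        ++writes;
        queue_.clear();
    }
};
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Two tasks posting at the same time would thus cost one system call instead of two, which is where the throughput gain comes from.&lt;/p&gt;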

&lt;p&gt;Note that this pattern is not exclusive to read-only or
individual queries. Transactions can work by using protocol features:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-cpp&quot;&gt;request req;
req.set_autosync(false); // All subsequent queries are part of the same transaction
req.add_query(&quot;UPDATE table1 SET x = $1 WHERE y = 2&quot;, {42});
req.add_query(&quot;UPDATE table2 SET x = $1 WHERE y = 42&quot;, {2});
req.add_sync(); // The two updates run atomically
co_await conn.async_exec(req, res);
&lt;/code&gt;&lt;/pre&gt;

&lt;h2 id=&quot;connection-pools&quot;&gt;Connection pools&lt;/h2&gt;

&lt;p&gt;I mentioned there were two main usage scenarios in the library.
Sometimes, connections must be used in a &lt;strong&gt;stateful&lt;/strong&gt; way:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-cpp&quot;&gt;request req;
req.add_simple_query(&quot;BEGIN&quot;); // start a transaction manually
req.add_query(&quot;SELECT * FROM library WHERE author = $1 FOR UPDATE&quot;, {&quot;Ruben&quot;}); // lock rows
co_await conn.async_exec(req, lib);

// Do something in the client that depends on lib
if (lib.id == &quot;Boost.MySQL&quot;)
    co_return; // don't

// Now compose another request that depends on what we read from lib
req.clear();
req.add_query(&quot;UPDATE library SET status = 'deprecated' WHERE id = $1&quot;, {lib.id});
req.add_simple_query(&quot;COMMIT&quot;);
co_await conn.async_exec(req, ignore);
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The key point here is that this pattern requires exclusive access to &lt;code&gt;conn&lt;/code&gt;.
No other requests should be interleaved between the first and the second
&lt;code&gt;async_exec&lt;/code&gt; invocations.&lt;/p&gt;

&lt;p&gt;The best way to solve this is by using a connection pool.
This is what client code could look like:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-cpp&quot;&gt;co_await pool.async_exec([&amp;amp;] (connection&amp;amp; conn) -&amp;gt; asio::awaitable&amp;lt;system::error_code&amp;gt; {
    request req;
    req.add_simple_query(&quot;BEGIN&quot;);
    req.add_query(&quot;SELECT balance, status FROM accounts WHERE user_id = $1 FOR UPDATE&quot;, {user_id});

    account_info acc;
    co_await conn.async_exec(req, into(acc));

    // Check if account has sufficient funds and is active
    if (acc.balance &amp;lt; payment_amount || acc.status != &quot;active&quot;)
        co_return error::insufficient_funds;

    // Call external payment gateway API - this CANNOT be done in SQL
    auto result = co_await payment_gateway.process_charge(user_id, payment_amount);

    // Compose next request based on the external API response
    req.clear();
    if (result.success) {
        req.add_query(
            &quot;UPDATE accounts SET balance = balance - $1 WHERE user_id = $2&quot;,
            {payment_amount, user_id}
        );
        req.add_simple_query(&quot;COMMIT&quot;);
    }
    co_await conn.async_exec(req, ignore);

    // The connection is automatically returned to the pool when this coroutine completes
    co_return result.success ? error_code{} : error::payment_failed;
});
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;I explicitly want to avoid having a &lt;code&gt;connection_pool::async_get_connection()&lt;/code&gt;
function like the one in Boost.MySQL. That function returns a proxy object that grants access
to a free connection; when destroyed, the connection is returned to the pool.
This pattern looks great on paper, but runs into severe complications in
multi-threaded code. The proxy object’s destructor needs to mutate the pool’s state,
requiring at least an &lt;code&gt;asio::dispatch&lt;/code&gt; to the pool’s executor, which may or may not
be a strand. This is so easy to get wrong that Boost.MySQL added a &lt;code&gt;pool_params::thread_safe&lt;/code&gt; boolean
option to handle it automatically, adding extra complexity. It is definitely something to avoid.&lt;/p&gt;
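&lt;p&gt;The callback-scoped design above sidesteps this problem: the pool itself returns the connection after the user function completes, on the pool’s own code path, so no destructor has to re-enter pool state. A toy synchronous model of the idea (illustrative only; it ignores exceptions and threading):&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-cpp&quot;&gt;#include &amp;lt;cstddef&amp;gt;
#include &amp;lt;functional&amp;gt;
#include &amp;lt;vector&amp;gt;

struct toy_connection { int id; };

// Hypothetical sketch: check out, run the user function with exclusive
// access, check back in. No proxy object, no destructor side effects.
class toy_pool {
    std::vector&amp;lt;toy_connection&amp;gt; idle_;
public:
    explicit toy_pool(int n) {
        for (int i = 0; i &amp;lt; n; ++i) idle_.push_back({i});
    }
    std::size_t idle_count() const { return idle_.size(); }

    int exec(const std::function&amp;lt;int(toy_connection&amp;amp;)&amp;gt;&amp;amp; fn) {
        toy_connection conn = idle_.back();
        idle_.pop_back();
        int result = fn(conn);  // exclusive access for the whole callback
        idle_.push_back(conn);  // returned by the pool, not by a destructor
        return result;
    }
};
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;In the real asynchronous version this logic would run on the pool’s executor, so returning the connection needs no cross-executor dispatch; exception safety would additionally require a scope guard around the user function.&lt;/p&gt;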

&lt;h2 id=&quot;sql-formatting&quot;&gt;SQL formatting&lt;/h2&gt;

&lt;p&gt;As we’ve seen, the protocol has built-in support for adding
parameters to queries (see placeholders like &lt;code&gt;$1&lt;/code&gt;). These placeholders
are expanded securely on the server.&lt;/p&gt;

&lt;p&gt;While this covers most cases, sometimes we need to generate SQL
that is too dynamic to be handled by the server. For instance,
a website might allow multiple optional filters, translating into
&lt;code&gt;WHERE&lt;/code&gt; clauses that might or might not be present.&lt;/p&gt;

&lt;p&gt;These use cases require SQL generated in the client. To do so,
we need a way of formatting user-supplied values without
running into SQL injection vulnerabilities. The final piece
of the library is a &lt;code&gt;format_sql&lt;/code&gt; function akin to the
one in Boost.MySQL.&lt;/p&gt;
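&lt;p&gt;To illustrate the problem such a function solves (this is not NativePG’s actual API), standard SQL escapes a single quote inside a string literal by doubling it. The bare minimum a client-side formatter must do before splicing a user-supplied string into a query is:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-cpp&quot;&gt;#include &amp;lt;string&amp;gt;

// Minimal escaping sketch: wrap the value in quotes, doubling any
// embedded single quote so it cannot terminate the literal early
std::string quote_literal(const std::string&amp;amp; value) {
    std::string out = &quot;'&quot;;
    for (char c : value) {
        if (c == '\'') out += &quot;''&quot;;
        else out += c;
    }
    out += '\'';
    return out;
}

// quote_literal(&quot;O'Brien&quot;) yields 'O''Brien'
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;A production formatter must also care about client encodings, PostgreSQL settings such as &lt;code&gt;standard_conforming_strings&lt;/code&gt;, identifiers versus literals, and NULL values; server-side parameters like &lt;code&gt;$1&lt;/code&gt; remain preferable whenever the query shape allows them.&lt;/p&gt;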

&lt;h2 id=&quot;final-thoughts&quot;&gt;Final thoughts&lt;/h2&gt;

&lt;p&gt;While the plan is clear, there is still much to be done here.
Dedicated APIs for high-throughput data copying and
push notifications still need to be implemented. Some of the described
APIs have a solid working implementation, while others still need
some work. All in all, I hope that this library can soon reach a state
where it can be useful to people.&lt;/p&gt;</content><author><name></name></author><category term="ruben" /><summary type="html">Do you know Boost.MySQL? If you’ve been reading my posts, you probably do. Many people have wondered ‘why not Postgres?’. Well, the time is now. TL;DR: I’m writing the equivalent of Boost.MySQL, but for PostgreSQL. You can find the code here. Since libPQ is already a good library, the NativePG project intends to be more ambitious than Boost.MySQL. In addition to the expected Asio interface, I intend to provide a sans-io API that exposes primitives like message serialization. Throughout this post, I will go into the intended library design and the rationales behind its design. The lowest level: message serialization PostgreSQL clients communicate with the server using a binary protocol on top of TCP, termed the frontend/backend protocol. The protocol defines a set of messages used for interactions. For example, when running a query, the following happens: ┌────────┐ ┌────────┐ │ Client │ │ Server │ └───┬────┘ └───┬────┘ │ │ │ Query │ │ ──────────────────────────────────────────&amp;gt; │ │ │ │ RowDescription │ │ &amp;lt;────────────────────────────────────────── │ │ │ │ DataRow │ │ &amp;lt;────────────────────────────────────────── │ │ │ │ CommandComplete │ │ &amp;lt;────────────────────────────────────────── │ │ │ │ ReadyForQuery │ │ &amp;lt;────────────────────────────────────────── │ │ │ In the lowest layer, this library provides functions to serialize and parse such messages. The goal here is being as efficient as possible. Parsing functions are non-allocating, and use an approach inspired by Boost.Url collections: Parsing database types The PostgreSQL type system is quite rich. In addition to the usual SQL built-in types, it supports advanced scalars like UUIDs, arrays and user-defined aggregates. When running a query, libPQ exposes retrieved data as either raw text or bytes. 
This is what the server sends in the DataRow packets shown above. To do something useful with the data, users likely need parsing and serializing such types. The next layer of NativePG is in charge of providing such functions. This will likely contain some extension points for users to plug in their types. This is the general form of such functions: system::error_code parse(span&amp;lt;const std::byte&amp;gt; from, T&amp;amp; to, const connection_state&amp;amp;); void serialize(const T&amp;amp; from, dynamic_buffer&amp;amp; to, const connection_state&amp;amp;); Note that some types might require access to session configuration. For instance, dates may be expressed using different wire formats depending on the connection’s runtime settings. At the time of writing, only ints and strings are supported, but this will be extended soon. Composing requests Efficiency in database communication is achieved with pipelining. A network round-trip with the server is worth a thousand allocations in the client. It is thus critical that: The protocol properly supports pipelining. This is the case with PostgreSQL. The client should expose an interface to it, and make it very easy to use. libPQ does the first, and NativePG intends to achieve the second. NativePG pipelines by default. In NativePG, a request object is always a pipeline: // Create a request request req; // These two queries will be executed as part of a pipeline req.add_query(&quot;SELECT * FROM libs WHERE author = $1&quot;, {&quot;Ruben&quot;}); req.add_query(&quot;DELETE FROM libs WHERE author &amp;lt;&amp;gt; $1&quot;, {&quot;Ruben&quot;}); Everything you may ask the server can be added to request. This includes preparing and executing statements, establishing pipeline synchronization points, and so on. It aims to be close enough to the protocol to be powerful, while also exposing high-level functions to make things easier. 
Reading responses Like request, the core response mechanism aims to be as close to the protocol as possible. Since use cases here are much more varied, there is no single response class, but a concept, instead. This is what a response_handler looks like: struct my_handler { // Check that the handler is compatible with the request, // and prepare any required data structures. Called once at the beginning handler_setup_result setup(const request&amp;amp; req, std::size_t pipeline_offset); // Called once for every message received from the server // (e.g. `RowDescription`, `DataRow`, `CommandComplete`) void on_message(const any_request_message&amp;amp; msg); // The overall result of the operation (error_code + diagnostic string). // Called after the operation has finished. const extended_error&amp;amp; result() const; }; Note that on_message is not allowed to report errors. Even if a handler encounters a problem with a message (imagine finding a NULL for a field where the user isn’t expecting one), this is a user error, rather than a protocol error. Subsequent steps in the pipeline must not be affected by this. This is powerful but very low-level. Using this mechanism, the library exposes an interface to parse the result of a query into a user-supplied struct, using Boost.Describe: struct library { std::int32_t id; std::string name; std::string cpp_version; }; BOOST_DESCRIBE_STRUCT(library, (), (id, name, cpp_version)) // ... std::vector&amp;lt;library&amp;gt; libs; auto handler = nativepg::into(libs); // this is a valid response_handler Network algorithms Given a user request and response handler, how do we send these to the server? We need a set of network algorithms to achieve this. Some of these are trivial: sending a request to the server is an asio::write on the request’s buffer. Others, however, are more involved: Reading a pipeline response needs to verify that the message sequence is what we expected, for security, and handle errors gracefully. 
The handshake algorithm, in charge of authentication when we connect to the server, needs to respond to server authentication challenges, which may come in different forms. Writing these using asio::async_compose is problematic because: They become tied to Boost.Asio. They are difficult to test. They result in long compile times and code bloat due to templating. At the moment, these are written as finite state machines, similar to how OpenSSL behaves in non-blocking mode: // Reads the response of a pipeline (simplified). // This is a hand-wired generator. class read_response_fsm { public: // User-supplied arguments: request and response read_response_fsm(const request&amp;amp; req, response_handler_ref handler); // Yielded to signal that we should read from the server struct read_args { span&amp;lt;std::byte&amp;gt; buffer; }; // Yielded to signal that we're done struct done_args { system::error_code result; }; variant&amp;lt;read_args, done_args&amp;gt; resume(connection_state&amp;amp;, system::error_code io_result, std::size_t bytes_transferred); }; The idea is that higher-level code should call resume until it returns a done_args value. This allows de-coupling from the underlying I/O runtime. Since NativePG targets C++20, I’m considering rewriting this as a coroutine. Boost.Capy (currently under development - hopefully part of Boost soon) could be a good candidate for this. Putting everything together: the Asio interface At the end of the day, most users just want a connection object they can easily use. Once all the sans-io parts are working, writing it is pretty straight-forward. 
This is what end user code looks like: // Create a connection connection conn{co_await asio::this_coro::executor}; // Connect co_await conn.async_connect( {.hostname = &quot;localhost&quot;, .username = &quot;postgres&quot;, .password = &quot;&quot;, .database = &quot;postgres&quot;} ); std::cout &amp;lt;&amp;lt; &quot;Startup complete\n&quot;; // Compose our request and response request req; req.add_query(&quot;SELECT * FROM libs WHERE author = $1&quot;, {&quot;Ruben&quot;}); std::vector&amp;lt;library&amp;gt; libs; // Run the request co_await conn.async_exec(req, into(libs)); Auto-batch connections While connection is good, experience has shown me that it’s still too low-level for most users: Connection establishment is manual with async_connect. No built-in reconnection or health checks. No built-in concurrent execution of requests. That is, async_exec first writes the request, then reads the response. Other requests may not be executed during this period. This limits the connection’s throughput. For this reason, NativePG will provide some higher-level interfaces that will make server communication easier and more efficient. To get a feel of what we need, we should first understand the two main usage patterns that we expect. Most of the time, connections are used in a stateless way. For example, consider querying data from the server: request req; req.add_query(&quot;SELECT * FROM libs WHERE author = $1&quot;, {&quot;Ruben&quot;}); co_await conn.async_exec(req, res); This query is not mutating connection state in any way. Other queries could be inserted before and after it without making any difference. I plan to add a higher-level connection type, similar to redis::connection in Boost.Redis, that automatically batches concurrent requests and handles reconnection. The key differences with connection would be: Several independent tasks can share an auto-batch connection. This is an error for connection. 
If several requests are queued at the same time, the connection may send them together to the server using a single system call. There is no async_connect in an auto-batch connection. Reconnection is handled automatically. Note that this pattern is not exclusive to read-only or individual queries. Transactions can work by using protocol features: request req; req.set_autosync(false); // All subsequent queries are part of the same transaction req.add_query(&quot;UPDATE table1 SET x = $1 WHERE y = 2&quot;, {42}); req.add_query(&quot;UPDATE table2 SET x = $1 WHERE y = 42&quot;, {2}); req.add_sync(); // The two updates run atomically co_await conn.async_exec(req, res); Connection pools I mentioned there were two main usage scenarios in the library. Sometimes, it is required to use connections in a stateful way: request req; req.add_simple_query(&quot;BEGIN&quot;); // start a transaction manually req.add_query(&quot;SELECT * FROM library WHERE author = $1 FOR UPDATE&quot;, {&quot;Ruben&quot;}); // lock rows co_await conn.async_exec(req, lib); // Do something in the client that depends on lib if (lib.id == &quot;Boost.MySQL&quot;) co_return; // don't // Now compose another request that depends on what we read from lib req.clear(); req.add_query(&quot;UPDATE library SET status = 'deprecated' WHERE id = $1&quot;, {lib.id}); req.add_simple_query(&quot;COMMIT&quot;); co_await conn.async_exec(req, ignore); The key point here is that this pattern requires exclusive access to conn. No other requests should be interleaved between the first and the second async_exec invocations. The best way to solve this is by using a connection pool. 
This is what client code could look like: co_await pool.async_exec([&amp;amp;] (connection&amp;amp; conn) -&amp;gt; asio::awaitable&amp;lt;system::error_code&amp;gt; { request req; req.add_simple_query(&quot;BEGIN&quot;); req.add_query(&quot;SELECT balance, status FROM accounts WHERE user_id = $1 FOR UPDATE&quot;, {user_id}); account_info acc; co_await conn.async_exec(req, into(acc)); // Check if account has sufficient funds and is active if (acc.balance &amp;lt; payment_amount || acc.status != &quot;active&quot;) co_return error::insufficient_funds; // Call external payment gateway API - this CANNOT be done in SQL auto result = co_await payment_gateway.process_charge(user_id, payment_amount); // Compose next request based on the external API response req.clear(); if (result.success) { req.add_query( &quot;UPDATE accounts SET balance = balance - $1 WHERE user_id = $2&quot;, {payment_amount, user_id} ); req.add_simple_query(&quot;COMMIT&quot;); } co_await conn.async_exec(req, ignore); // The connection is automatically returned to the pool when this coroutine completes co_return result.success ? error_code{} : error::payment_failed; }); I explicitly want to avoid having a connection_pool::async_get_connection() function, like in Boost.MySQL. This function returns a proxy object that grants access to a free connection. When destroyed, the connection is returned to the pool. This pattern looks great on paper, but runs into severe complications in multi-threaded code. The proxy object’s destructor needs to mutate the pool’s state, thus needing at least an asio::dispatch to the pool’s executor, which may or may not be a strand. It is so easy to get wrong that Boost.MySQL added a pool_params::thread_safe boolean option to take care of this automatically, adding extra complexity. Definitely something to avoid. SQL formatting As we’ve seen, the protocol has built-in support for adding parameters to queries (see placeholders like $1). 
These placeholders are expanded in the server securely. While this covers most cases, sometimes we need to generate SQL that is too dynamic to be handled by the server. For instance, a website might allow multiple optional filters, translating into WHERE clauses that might or might not be present. These use cases require SQL generated in the client. To do so, we need a way of formatting user-supplied values without running into SQL injection vulnerabilities. The final piece of the library becomes a format_sql function akin to the one in Boost.MySQL. Final thoughts While the plan is clear, there is still much to be done here. There are dedicated APIs for high-throughput data copying and push notifications that need to be implemented. Some of the described APIs have a solid working implementation, while others still need some work. All in all, I hope that this library can soon reach a state where it can be useful to people.</summary></entry><entry><title type="html">Systems, CI Updates Q4 2025</title><link href="http://cppalliance.org/sam/2026/01/22/SamsQ4Update.html" rel="alternate" type="text/html" title="Systems, CI Updates Q4 2025" /><published>2026-01-22T00:00:00+00:00</published><updated>2026-01-22T00:00:00+00:00</updated><id>http://cppalliance.org/sam/2026/01/22/SamsQ4Update</id><content type="html" xml:base="http://cppalliance.org/sam/2026/01/22/SamsQ4Update.html">&lt;h3 id=&quot;doc-previews-and-doc-builds&quot;&gt;Doc Previews and Doc Builds&lt;/h3&gt;

&lt;p&gt;The pull request to isomorphic-git “Support git commands run in submodules” was merged, and released in the latest version. (See the previous post for an explanation.) The commit modified 153 files, covering all the git API commands and the tests applying to each one. The next step is for upstream Antora to adjust package.json and refer to the newer isomorphic-git so it will be distributed along with Antora. Since isomorphic-git is more widely used than just Antora, their userbase is already field-testing the new version.&lt;/p&gt;

&lt;p&gt;Created an Antora extension, https://github.com/cppalliance/antora-downloads-extension, that retries ui-bundle downloads. The Boost Superproject builds sometimes fail because of Antora download failures. I am now in the process of rolling out this extension to all affected repositories. It must be included in each playbook that downloads the bundle as part of the build process.&lt;/p&gt;

&lt;p&gt;Adjusted doc previews to update the existing PR comment instead of posting many new ones, to reduce the email spam effect. The job modifies a timestamp in the PR comment, which allows developers to see the most recent build time and whether the pages rebuilt successfully. I needed to solve some puzzles to implement this, since Jenkins jobs are usually stateless and don’t know whether they previously posted a comment, or which comment should be modified across subsequent job runs. It turns out there is a “Build with Parameters” feature, and properties/parameters can be saved in the job.&lt;/p&gt;

&lt;h3 id=&quot;boost-website-boostorgwebsite-v2&quot;&gt;Boost website boostorg/website-v2&lt;/h3&gt;

&lt;p&gt;Lowered the CPU threshold on the horizontal pod autoscaler to scale pods more rapidly when there is increased traffic.&lt;/p&gt;

&lt;p&gt;When web visitors go to the wrong domain or URL, set the redirects to 301 “moved permanently”. Reduced the number of redirect hops by sending visitors directly to the final URL www.boost.org.&lt;/p&gt;

&lt;p&gt;Investigated a bug where PDF files were timing out and crashing the server. Such files should not be parsed by Beautiful Soup or lxml.&lt;/p&gt;

&lt;p&gt;During this quarter we published Boost 1.90.0. Worked closely with the release managers to resolve problems during the release, such as the boost.org website not fully updating after importing the new version.&lt;/p&gt;

&lt;p&gt;Meetings about the CMS feature and other topics. Many general discussions about website issues.&lt;/p&gt;

&lt;h3 id=&quot;mailman3&quot;&gt;Mailman3&lt;/h3&gt;

&lt;p&gt;When unmoderating a new user on Mailman3, an administrator must click a drop-down and select “Default Processing” so the subscriber may send emails directly to the list and not continue to be moderated. I have started developing an enhancement in Postorius that adds one simple button, “Accept and Unmoderate”, streamlining the process. However, as often happens with new and radical ideas sent to the Mailman maintainers, they put up roadblocks. While I believe the new feature is promising, since it lets administrators unmoderate users quickly without skipping that step, the future of the pull request is uncertain.&lt;/p&gt;

&lt;h3 id=&quot;boost-ci&quot;&gt;boost-ci&lt;/h3&gt;

&lt;p&gt;Created a Fastly CDN mirror of keyserver.ubuntu.com at keyserver.boost.org. If keyserver.ubuntu.com experiences occasional outages but keys are cached on the CDN mirror, then CI jobs will be able to proceed without difficulty. Configured both Drone and boost-ci to use the CDN at keyserver.boost.org.&lt;/p&gt;

&lt;h3 id=&quot;jenkins&quot;&gt;Jenkins&lt;/h3&gt;

&lt;p&gt;Beast2 doc previews. Capy previews. JSON lcov jobs. Openmethod doc previews.&lt;/p&gt;

&lt;p&gt;Modified email notifications to send ‘recovery’ type messages after failed jobs. Other enhancements to Jenkins jobs.&lt;/p&gt;

&lt;h3 id=&quot;boost-release-process-boostorgrelease-tools&quot;&gt;Boost release process boostorg/release-tools&lt;/h3&gt;

&lt;p&gt;When building releases with publish-release.py, generate “nodocs” copies of the Boost releases and upload them to archives.boost.io. The “nodocs” versions are now functional. If anyone would like to accelerate their CI build process, set the target URL to the nodocs variant, for example: https://archives.boost.io/release/1.90.0/source-nodocs/boost_1_90_0.tar.gz.&lt;/p&gt;

&lt;p&gt;Migrated the release workstation instance from GCP to AWS so that during the next Boost release 1.91.0 the server will be closer to AWS S3, allowing file uploads to transfer faster. Designed a mechanism that resizes the server instance on a cron schedule via GHA. Most of the time it’s quite small, but during releases the server is allocated more CPU.&lt;/p&gt;

&lt;h3 id=&quot;drone&quot;&gt;Drone&lt;/h3&gt;

&lt;p&gt;Microsoft Windows - VS2026 container image.&lt;br /&gt;
Ubuntu 25.10 container image.&lt;/p&gt;

&lt;h3 id=&quot;gha&quot;&gt;GHA&lt;/h3&gt;

&lt;p&gt;Added CI jobs to build “documentation” in the boostorg/container repository. GHA will test doc builds, which helps when debugging modifications to documentation.&lt;/p&gt;

&lt;p&gt;Fil-C is a “fanatically compatible memory-safe implementation of C and C++.” https://github.com/pizlonator/fil-c  Upon request, I composed an example Fil-C GitHub Actions job at https://github.com/sdarwin/fil-c-demo which was then applied by developers in other Boost repositories.&lt;/p&gt;</content><author><name></name></author><category term="sam" /><summary type="html">Doc Previews and Doc Builds The pull request to isomorphic-git “Support git commands run in submodules” was merged, and released in the latest version. (See previous post for an explanation). The commit modified 153 files, all the git api commands, and tests applying to each one. The next step is for upstream Antora to adjust package.json and refer to the newer isomorphic-git so it will be distributed along with Antora. Since isomorphic-git is more widely used than just Antora, their userbase is already field testing the new version. Created an antora extension https://github.com/cppalliance/antora-downloads-extension that will retry ui-bundle downloads. The Boost Superproject builds sometimes fail because of Antora download failures. I am now in the process of rolling out this extension to all affected repositories. It must be included in each playbook if that playbook downloads the bundle as part of the build process. Adjusted doc previews to update the existing PR comments instead of posting many new ones, to reduce the email spam effect. The job will modify a timestamp in the PR comment which allows developers to see the most recent build time and if the pages rebuilt successfully. I needed to solve some puzzles to implement this, since usually Jenkins jobs are stateless and don’t know if they previously posted a comment, or which comment it was that should be modified across subsequent jobs runs. It turns out there is a feature “Build with Parameters”, and properties/parameters can be saved in the job. 
Boost website boostorg/website-v2 Lowered the CPU threshold on the horizontal pod autoscaler to scale pods more rapidly when there is increased traffic. When web visitors go to the wrong domain or URL, set the redirects to 301 “moved permanently”. Reduced the number of redirect hops by sending visitors directly to the final URL www.boost.org. Investigated a bug where PDF files were timing out and crashing the server. Those should not be parsed by beautiful soup or lxml. During this quarter we published boost 1.90.0. Worked closely with the release managers to resolve problems during the release. The boost.org website was not fully updating after importing the new version. Meetings about CMS feature, other topics. Many general discussions about website issues. Mailman3 When unmoderating a new user on mailman3 an administrator must click a drop-down and select “Default Processing” so this subscriber may send emails directly to the list and not continue to be moderated. I have started developing an enhancement in Postorius whereby there is one simple button “Accept and Unmoderate” thus streamlining the process. However as often happens with new and radical ideas sent to the Mailman maintainers, they put up roadblocks. While I believe the new feature is promising, and it is helpful to quickly unmoderate users, without skipping that step, the future of the pull request is uncertain. boost-ci Created a Fastly CDN mirror of keyserver.ubuntu.com at keyserver.boost.org. If keyserver.ubuntu.com experiences occasional outages but keys are cached on the CDN mirror, then CI jobs will be able to proceed without difficulty. Configured both Drone and boost-ci to use the CDN at keyserver.boost.org. Jenkins Beast2 doc previews. Capy previews. JSON lcov jobs. Openmethod doc previews. Modified email notifications to send ‘recovery’ type messages after failed jobs. Other enhancements to Jenkins jobs. 
Boost release process boostorg/release-tools When building releases with publish-release.py, generate “nodocs” copies of the Boost releases and upload them to archives.boost.io. The “nodocs” versions are now functional. If anyone would like to accelerate their CI build process, set the target URL to nodocs such as: https://archives.boost.io/release/1.90.0/source-nodocs/boost_1_90_0.tar.gz . Migrated the release workstation instance from GCP to AWS so that during the next Boost release 1.91.0 the server will be closer to AWS S3, allowing file uploads to transfer faster. Designed a mechanism that resizes the server instance on a cron schedule via GHA. Most of the time it’s quite small, but during releases the server is allocated more CPU. Drone Microsoft Windows - VS2026 container image. Ubuntu 25.10 container image. GHA Added CI jobs to build “documentation” in the boostorg/container repository. GHA will test doc builds, which helps when debugging modifications to documentation. Fil-C is a “fanatically compatible memory-safe implementation of C and C++.” https://github.com/pizlonator/fil-c Upon request, I composed an example Fil-C GitHub Actions job at https://github.com/sdarwin/fil-c-demo which was then applied by developers in other Boost repositories.</summary></entry><entry><title type="html">Containers galore</title><link href="http://cppalliance.org/joaquin/2026/01/18/Joaquins2025Q4Update.html" rel="alternate" type="text/html" title="Containers galore" /><published>2026-01-18T00:00:00+00:00</published><updated>2026-01-18T00:00:00+00:00</updated><id>http://cppalliance.org/joaquin/2026/01/18/Joaquins2025Q4Update</id><content type="html" xml:base="http://cppalliance.org/joaquin/2026/01/18/Joaquins2025Q4Update.html">&lt;p&gt;During Q4 2025, I’ve been working in the following areas:&lt;/p&gt;

&lt;h3 id=&quot;boostbloom&quot;&gt;Boost.Bloom&lt;/h3&gt;

&lt;ul&gt;
  &lt;li&gt;Written &lt;a href=&quot;https://bannalia.blogspot.com/2025/10/bulk-operations-in-boostbloom.html&quot;&gt;an article&lt;/a&gt; explaining
the usage and implementation of the recently introduced bulk operations.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id=&quot;boostunordered&quot;&gt;Boost.Unordered&lt;/h3&gt;

&lt;ul&gt;
  &lt;li&gt;Written maintenance fixes
&lt;a href=&quot;https://github.com/boostorg/unordered/pull/320&quot;&gt;PR#320&lt;/a&gt;,
&lt;a href=&quot;https://github.com/boostorg/unordered/pull/321&quot;&gt;PR#321&lt;/a&gt;,
&lt;a href=&quot;https://github.com/boostorg/unordered/pull/326&quot;&gt;PR#326&lt;/a&gt;,
&lt;a href=&quot;https://github.com/boostorg/unordered/pull/327&quot;&gt;PR#327&lt;/a&gt;,
&lt;a href=&quot;https://github.com/boostorg/unordered/pull/328&quot;&gt;PR#328&lt;/a&gt;,
&lt;a href=&quot;https://github.com/boostorg/unordered/pull/335&quot;&gt;PR#335&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id=&quot;boostmultiindex&quot;&gt;Boost.MultiIndex&lt;/h3&gt;

&lt;ul&gt;
  &lt;li&gt;Refactored the library to use Boost.Mp11 instead of Boost.MPL (&lt;a href=&quot;https://github.com/boostorg/multi_index/pull/87&quot;&gt;PR#87&lt;/a&gt;),
remove pre-C++11 variadic argument emulation (&lt;a href=&quot;https://github.com/boostorg/multi_index/pull/88&quot;&gt;PR#88&lt;/a&gt;)
and remove all sorts of pre-C++11 polyfills (&lt;a href=&quot;https://github.com/boostorg/multi_index/pull/90&quot;&gt;PR#90&lt;/a&gt;).
These changes are explained in &lt;a href=&quot;https://bannalia.blogspot.com/2025/12/boostmultiindex-refactored.html&quot;&gt;an article&lt;/a&gt;
and will be shipped in Boost 1.91. Transition is expected to be mostly backwards
compatible, though two Boost libraries needed adjustments as they use MultiIndex
in rather advanced ways (see below).&lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id=&quot;boostflyweight&quot;&gt;Boost.Flyweight&lt;/h3&gt;

&lt;ul&gt;
  &lt;li&gt;Adapted the library to work with Boost.MultiIndex 1.91
(&lt;a href=&quot;https://github.com/boostorg/flyweight/pull/25&quot;&gt;PR#25&lt;/a&gt;).&lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id=&quot;boostbimap&quot;&gt;Boost.Bimap&lt;/h3&gt;

&lt;ul&gt;
  &lt;li&gt;Adapted the library to work with Boost.MultiIndex 1.91
(&lt;a href=&quot;https://github.com/boostorg/bimap/pull/50&quot;&gt;PR#50&lt;/a&gt;).&lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id=&quot;other-boost-libraries&quot;&gt;Other Boost libraries&lt;/h3&gt;

&lt;ul&gt;
  &lt;li&gt;Helped set up the Antora-based doc build chain for DynamicBitset
(&lt;a href=&quot;https://github.com/boostorg/dynamic_bitset/pull/96&quot;&gt;PR#96&lt;/a&gt;,
&lt;a href=&quot;https://github.com/boostorg/dynamic_bitset/pull/97&quot;&gt;PR#97&lt;/a&gt;,
&lt;a href=&quot;https://github.com/boostorg/dynamic_bitset/pull/98&quot;&gt;PR#98&lt;/a&gt;).&lt;/li&gt;
  &lt;li&gt;Same with OpenMethod
(&lt;a href=&quot;https://github.com/boostorg/openmethod/pull/40&quot;&gt;PR#40&lt;/a&gt;).&lt;/li&gt;
  &lt;li&gt;Fixed concept compliance of iterators provided by Spirit
(&lt;a href=&quot;https://github.com/boostorg/spirit/pull/840&quot;&gt;PR#840&lt;/a&gt;,
&lt;a href=&quot;https://github.com/boostorg/spirit/pull/841&quot;&gt;PR#841&lt;/a&gt;).&lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id=&quot;experiments-with-fil-c&quot;&gt;Experiments with Fil-C&lt;/h3&gt;

&lt;p&gt;&lt;a href=&quot;https://fil-c.org/&quot;&gt;Fil-C&lt;/a&gt; is a C and C++ compiler built on top of LLVM that adds run-time
memory-safety mechanisms preventing out-of-bounds and use-after-free accesses. 
I’ve been experimenting with compiling the Boost.Unordered test suite with Fil-C and running
some benchmarks to measure the resulting degradation in execution times and memory usage.
Results follow:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Articles
    &lt;ul&gt;
      &lt;li&gt;&lt;a href=&quot;https://bannalia.blogspot.com/2025/11/some-experiments-with-boostunordered-on.html&quot;&gt;Some experiments with Boost.Unordered on Fil-C&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;https://bannalia.blogspot.com/2025/11/comparing-run-time-performance-of-fil-c.html&quot;&gt;Comparing the run-time performance of Fil-C and ASAN&lt;/a&gt;&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;Repos
    &lt;ul&gt;
      &lt;li&gt;&lt;a href=&quot;https://github.com/joaquintides/fil-c_boost_unordered&quot;&gt;Compiling Boost.Unordered test suite with Fil-C&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;https://github.com/boostorg/boost_unordered_benchmarks/tree/boost_unordered_flat_map_fil-c&quot;&gt;Benchmarks of Fil-C and ASAN against baseline&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;https://github.com/boostorg/boost_unordered_benchmarks/tree/boost_unordered_flat_map_fil-c_memory&quot;&gt;Memory consumption of Fil-C and ASAN with respect to baseline&lt;/a&gt;&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id=&quot;proof-of-concept-of-a-semistable-vector&quot;&gt;Proof of concept of a semistable vector&lt;/h3&gt;

&lt;p&gt;By “semistable vector” I mean that pointers to the elements may be invalidated
upon insertion and erasure (just like a regular &lt;code&gt;std::vector&lt;/code&gt;) but iterators
to non-erased elements remain valid throughout.
I’ve written a small &lt;a href=&quot;https://github.com/joaquintides/semistable_vector/&quot;&gt;proof of concept&lt;/a&gt;
of this idea and measured its performance against non-stable &lt;code&gt;std::vector&lt;/code&gt; and fully
stable &lt;code&gt;std::list&lt;/code&gt;. It is doubtful that such a container would be of interest for
production use, but the techniques explored are mildly interesting and could be adapted, for
instance, to write powerful safe iterator facilities.&lt;/p&gt;

&lt;h3 id=&quot;teaser-exploring-the-stdhive-space&quot;&gt;Teaser: exploring the &lt;code&gt;std::hive&lt;/code&gt; space&lt;/h3&gt;

&lt;p&gt;In short, &lt;code&gt;std::hive&lt;/code&gt; (coming in C++26) is a container with stable references/iterators
and fast insertion and erasure. The &lt;a href=&quot;https://github.com/mattreecebentley/plf_hive&quot;&gt;reference implementation&lt;/a&gt;
for this container relies on a rather convoluted data structure, and I started to wonder
if something simpler could deliver superior performance. Expect to see the results of
my experiments in Q1 2026.&lt;/p&gt;

&lt;h3 id=&quot;website&quot;&gt;Website&lt;/h3&gt;

&lt;ul&gt;
  &lt;li&gt;Filed issues
&lt;a href=&quot;https://github.com/boostorg/website-v2/issues/1936&quot;&gt;#1936&lt;/a&gt;,
&lt;a href=&quot;https://github.com/boostorg/website-v2/issues/1937&quot;&gt;#1937&lt;/a&gt;,
&lt;a href=&quot;https://github.com/boostorg/website-v2/issues/1984&quot;&gt;#1984&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id=&quot;support-to-the-community&quot;&gt;Support to the community&lt;/h3&gt;

&lt;ul&gt;
  &lt;li&gt;I’ve been part of a task force with the C++ Alliance to review the entire
catalog of Boost libraries (170+) and categorize them according to their
maintenance status and relevance in light of additions to the C++
standard library over the years.&lt;/li&gt;
  &lt;li&gt;Supporting the community as a member of the Fiscal Sponsorship Committee (FSC).&lt;/li&gt;
&lt;/ul&gt;</content><author><name></name></author><category term="joaquin" /><summary type="html">During Q4 2025, I’ve been working in the following areas: Boost.Bloom Written an article explaining the usage and implementation of the recently introduced bulk operations. Boost.Unordered Written maintenance fixes PR#320, PR#321, PR#326, PR#327, PR#328, PR#335. Boost.MultiIndex Refactored the library to use Boost.Mp11 instead of Boost.MPL (PR#87), remove pre-C++11 variadic argument emulation (PR#88) and remove all sorts of pre-C++11 polyfills (PR#90). These changes are explained in an article and will be shipped in Boost 1.91. Transition is expected to be mostly backwards compatible, though two Boost libraries needed adjustments as they use MultiIndex in rather advanced ways (see below). Boost.Flyweight Adapted the library to work with Boost.MultiIndex 1.91 (PR#25). Boost.Bimap Adapted the library to work with Boost.MultiIndex 1.91 (PR#50). Other Boost libraries Helped set up the Antora-based doc build chain for DynamicBitset (PR#96, PR#97, PR#98). Same with OpenMethod (PR#40). Fixed concept compliance of iterators provided by Spirit (PR#840, PR#841). Experiments with Fil-C Fil-C is a C and C++ compiler built on top of LLVM that adds run-time memory-safety mechanisms preventing out-of-bounds and use-after-free accesses. I’ve been experimenting with compiling Boost.Unordered test suite with Fil-C and running some benchmarks to measure the resulting degradation in execution times and memory usage. 
Results follow: Articles Some experiments with Boost.Unordered on Fil-C Comparing the run-time performance of Fil-C and ASAN Repos Compiling Boost.Unordered test suite with Fil-C Benchmarks of Fil-C and ASAN against baseline Memory consumption of Fil-C and ASAN with respect to baseline Proof of concept of a semistable vector By “semistable vector” I mean that pointers to the elements may be invalidated upon insertion and erasure (just like a regular std::vector) but iterators to non-erased elements remain valid throughout. I’ve written a small proof of concept of this idea and measured its performance against non-stable std::vector and fully stable std::list. It is doubtful that such a container would be of interest for production use, but the techniques explored are mildly interesting and could be adapted, for instance, to write powerful safe iterator facilities. Teaser: exploring the std::hive space In short, std::hive (coming in C++26) is a container with stable references/iterators and fast insertion and erasure. The reference implementation for this container relies on a rather convoluted data structure, and I started to wonder if something simpler could deliver superior performance. Expect to see the results of my experiments in Q1 2026. Website Filed issues #1936, #1937, #1984. Support to the community I’ve been part of a task force with the C++ Alliance to review the entire catalog of Boost libraries (170+) and categorize them according to their maintenance status and relevance in light of additions to the C++ standard library over the years. 
Supporting the community as a member of the Fiscal Sponsorship Committee (FSC).</summary></entry><entry><title type="html">Decimal is Accepted and Next Steps</title><link href="http://cppalliance.org/matt/2026/01/15/Matts2025Q4Update.html" rel="alternate" type="text/html" title="Decimal is Accepted and Next Steps" /><published>2026-01-15T00:00:00+00:00</published><updated>2026-01-15T00:00:00+00:00</updated><id>http://cppalliance.org/matt/2026/01/15/Matts2025Q4Update</id><content type="html" xml:base="http://cppalliance.org/matt/2026/01/15/Matts2025Q4Update.html">&lt;p&gt;After two reviews the Decimal (&lt;a href=&quot;https://github.com/cppalliance/decimal&quot;&gt;https://github.com/cppalliance/decimal&lt;/a&gt;) library has been accepted into Boost.
Look for it to ship for the first time with Boost 1.91 in the Spring.
For current and prospective users, a new release series (v6) is available on the releases page of the library.
This major version change contains all of the bug fixes and addresses comments from the second review.
We have once again overhauled the documentation based on review feedback, significantly increasing the number of examples.
Between the &lt;code&gt;Basic Usage&lt;/code&gt; and &lt;code&gt;Examples&lt;/code&gt; tabs on the website, we believe there’s now enough information to quickly make good use of the library.
One big quality-of-life improvement worth highlighting in this version is that it ships with pretty printers for both GDB and LLDB.
It is a huge release (1108 commits with a diff stat of &amp;gt;50k LOC), but it is better than ever.
I expect that this is the last major version that will be released prior to moving to the Boost release cycle.&lt;/p&gt;

&lt;p&gt;Where to go from here?&lt;/p&gt;

&lt;p&gt;As I have mentioned in previous posts, the int128 (&lt;a href=&quot;https://github.com/cppalliance/int128&quot;&gt;https://github.com/cppalliance/int128&lt;/a&gt;) library started life as the backend for portable arithmetic and representation in the Decimal library.
It has since been expanded to include more of the standard library features that are unnecessary for a backend but useful to many people, such as &lt;code&gt;&amp;lt;format&amp;gt;&lt;/code&gt; support.
The last major update that I intend to make to the library before proposing it for Boost is to add CUDA support.
This would not only add portability to another platform for many users, but also open the door for Decimal to have CUDA support.
I will also be looking at a few of our performance measures as I think there are still places for improvement (such as signed 128-bit division).&lt;/p&gt;

&lt;p&gt;Lastly, towards the end of this quarter (March 5 - March 15), I will be serving as the review manager for Alfredo Correa’s Multi (&lt;a href=&quot;https://github.com/correaa/boost-multi&quot;&gt;https://github.com/correaa/boost-multi&lt;/a&gt;) library.
Multi is a modern C++ library that provides manipulation and access of data in multidimensional arrays for both CPU and GPU memory.
Feel free to give the library a go now and comment on what you find. 
This is a very high-quality library that should have an exciting review.&lt;/p&gt;</content><author><name></name></author><category term="matt" /><summary type="html">After two reviews the Decimal (https://github.com/cppalliance/decimal) library has been accepted into Boost. Look for it to ship for the first time with Boost 1.91 in the Spring. For current and prospective users, a new release series (v6) is available on the releases page of the library. This major version change contains all of the bug fixes and addresses comments from the second review. We have once again overhauled the documentation based on review feedback, significantly increasing the number of examples. Between the Basic Usage and Examples tabs on the website, we believe there’s now enough information to quickly make good use of the library. One big quality-of-life improvement worth highlighting in this version is that it ships with pretty printers for both GDB and LLDB. It is a huge release (1108 commits with a diff stat of &amp;gt;50k LOC), but it is better than ever. I expect that this is the last major version that will be released prior to moving to the Boost release cycle. Where to go from here? As I have mentioned in previous posts, the int128 (https://github.com/cppalliance/int128) library started life as the backend for portable arithmetic and representation in the Decimal library. It has since been expanded to include more of the standard library features that are unnecessary for a backend but useful to many people, such as &amp;lt;format&amp;gt; support. The last major update that I intend to make to the library before proposing it for Boost is to add CUDA support. This would not only add portability to another platform for many users, but also open the door for Decimal to have CUDA support. I will also be looking at a few of our performance measures as I think there are still places for improvement (such as signed 128-bit division). 
Lastly, towards the end of this quarter (March 5 - March 15), I will be serving as the review manager for Alfredo Correa’s Multi (https://github.com/correaa/boost-multi) library. Multi is a modern C++ library that provides manipulation and access of data in multidimensional arrays for both CPU and GPU memory. Feel free to give the library a go now and comment on what you find. This is a very high quality library which should have an exciting review.</summary></entry></feed>