Best practices for "QA" (quality assurance) of template documents; related idea

This is a request for best practices or perhaps, ultimately, a feature request…

Clausebase is remarkably powerful and, as a result, complicated to use. The complexity of coding introduces plenty of opportunities for errors and thus the need for a really good “QA” (quality assurance) process. But QA’ing templates is very challenging because you only see the text that is generated for the particular set of conditions one selects (i.e., when answering questions in the Q&A). To repeatedly go back to the Q&A and regenerate different versions of the document with all the possible permutations of datafield choices would be pretty much impossible. But somehow, one has to see all the different generated versions of the document in order to verify that the language renders correctly, without typos, coding errors, or other issues.

Are there any best practices to do this that I might not be aware of?

Without the ability to QA all the document possibilities in an efficient way, there is no way to be confident that every version of a document can be generated without errors. To put it simply: we are spending a lot of time QA’ing our documents and we are still uncovering errors in our work. It’s proving very difficult to find all the issues during the QA / testing process, even as that process takes us a long time.

To take a simple example, an equity agreement might have a single-trigger acceleration provision and a double-trigger provision. One needs to review both. Sure, one could change the Q&A and regenerate but that’s time-consuming. Now multiply that by all of the different clause and datafield possibilities.

What might work better is if Clausebase had some kind of QA mode that outputs all the possible different clauses that could be generated in a document.

It’s somewhat the same issue for datafields. I’d want to see each choice for a datafield, to ensure that the choices are coded correctly. So, to take the above idea further, the generated document in my proposed QA mode could have a placeholder for each datafield and show (e.g., in a footnote) each of the text options that could be generated for that field.

I’m not sure if there’s a complete and total way to do what I am suggesting, but I think in order for us to be able to create templates efficiently, we need better QA tools than we currently have. As things stand now, we are struggling to arrive at templates that we feel totally confident about. We eventually get there, but I wouldn’t say that we do so in a time-efficient manner. And of course, time is a precious commodity in our business.


I totally agree, this would be incredibly useful and save so much time!


Hi Kenneth (& Darina),

This is not the first time this question has come up. What you are encountering is indeed encountered by everyone who creates templates with hundreds of small variations.

Technically speaking, it’s actually not so difficult for us to create an exhaustive list of versions that reflects all your possible variations in a document. However, in practice, those exhaustive versions of an entire document are not actually very useful for power users like you, because they easily run into hundreds if not thousands of different versions.
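The combinatorial explosion behind that claim is easy to quantify: the number of distinct document versions is the product of the number of choices per datafield. A minimal sketch, using hypothetical datafield names (these are illustrative, not actual ClauseBase identifiers):

```python
from itertools import product

# Hypothetical datafields and their possible values.
fields = {
    "entity_type": ["corporation", "LLC"],
    "vesting": [True, False],
    "vesting_acceleration": ["single", "double", "none"],
    "governing_law": ["DE", "NY", "CA"],
}

# Each full document version corresponds to one combination of choices.
versions = list(product(*fields.values()))
print(len(versions))  # 2 * 2 * 3 * 3 = 36 versions from just four fields
```

With a realistic template containing dozens of datafields and conditional clauses, this product quickly reaches the hundreds or thousands of versions mentioned above.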

In fact, we have already built two kinds of tools in this regard, for individual paragraphs: see Assemble Document → Misc → Uncover, if you have selected a (sub)clause with variations:

This tool allows you to see all the possible variations, and even interactively play with them. For complex clauses, you will see that the number of variations can easily become daunting; imagine what this can mean for an entire document…

Similarly, have you already experimented with the Focus tool in Assemble Document? During editing, it allows you to immediately “simulate” different versions of a paragraph, with changes being immediately reflected, without you having to constantly hit the Save button. See the documentation for all the options.

Finally, have you tested the “debugging” tools inside the Simulation mode of Design Q&A? They allow you to see which questions & changes are enabled/disabled, as well as the underlying reasons, so you can interactively understand why certain parts of your Q&A are (not) active.

I will have another discussion internally about this topic, but could I perhaps also ask you which practical tools you think would actually help you? Perhaps we can get some inspiration to come up with something better than our existing tools, because mere “exhaustive variations” is probably not what you want.

On a side note: this is also an unresolved problem in the field of software development (which has a lot of similarities with document automation). While small-scale testing (“unit testing”) is easily done, the reality is that — despite intensive research since the 1960s — so-called “end-to-end” testing of software, as well as the testing of graphical user interfaces, is usually still done with human testers for the most part. There are some automation tools that help the testing process, but they are difficult to maintain.

As a follow-up to my own post:

We just released a small tweak to the Q&A simulation (currently available for the US, Belgian, Dutch & German servers, rest coming next weekend). Please have a look at the following video that illustrates the additions:

Essentially, you can now

  • configure the Q&A Simulation mode to always show changes (instead of having to Alt-click or having to explicitly invoke “Show last changes” in the “…” menu).
  • hide everything but the changed paragraphs.

The last one in particular should be very useful during testing.

Let me know what you think (it’s in a kind of test mode this week), I hope you like it!


Hi everyone,

I’m in a similar boat as the others, creating long documents with many variations.

@mtruyens I was not aware of some of these QA tools yet, this is a good reference post! If I think of any other QA tools that could help, I will add them here.

I really like the “Show only changes” addition, this is very useful for my workflow. From brief testing, I have encountered a bug where certain Q&A questions show a completely blank preview (except for the yellow banner), whilst most show only the changed clauses as intended. If I toggle “Show only changes” off, I can see the clauses get highlighted correctly in the preview.


Maarten, Thanks for all this. As usual, you’ve given this some meaningful thought. I can appreciate how this is a more general issue in software development, and a challenging one at that!

I would have to defer to @ElmerThoreson regarding the existing debugging tools, like Uncover, Focus, and Simulate. I imagine these would help Elmer and other Clausebase coders check their own work and yes, we should make sure they are using these tools effectively to make template drafts as accurate as possible. However, these tools would be of less use to those in the “user” role like me, who interact mostly with the documents that are generated and outputted to Word and less with the Clausebase coding interface directly.

In order for me and the “average user” to check the quality of output, I would generally want to see it in Word, which is why I suggested the solution that I did. What’s better about the full “dump” of clauses and all datafield choices is that one would not have to select anything or look at different versions; it would be a single, comprehensive view.

I can appreciate that in a full Word dump, one would see all the clause text but one still might not be able to assess the accuracy of the conditions causing the clause text to appear or not appear – because everything simply appears in the dump. My suggestion here would be for each Clause to be annotated (e.g., in footnotes or comments, or using braces) with information showing its controlling dependencies. E.g., the single-trigger acceleration of vesting provision would be annotated with [If equity^vesting is TRUE and if equity^vesting_acceleration is “single”] and the double-trigger acceleration of vesting provision would be annotated with [If equity^vesting is TRUE and if equity^vesting_acceleration is “double”].

It would be kind of like an “x-ray” into the entire underlying document, presented in a textual format so that a QA reviewer who is not necessarily a Clausebase coder could understand and evaluate it. It is not so much a list or run of different versions, as you put it, since – yes – there could theoretically be hundreds of different combinations, and to look at all those would be very time-consuming and repetitive. I’m thinking more in terms of a single document that reveals all the underlying text and logic of a template.
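The annotated “X-ray” dump described above could be sketched roughly as follows. The clause/condition data model here is entirely invented for illustration (it is not a ClauseBase API), but it shows the intended output shape: each clause printed once, prefixed with the condition that controls its appearance.

```python
# Hypothetical clause data: (title, controlling condition, clause text).
clauses = [
    ("Single-Trigger Acceleration",
     'equity^vesting is TRUE and equity^vesting_acceleration is "single"',
     "Upon a Change of Control, all unvested shares shall vest."),
    ("Double-Trigger Acceleration",
     'equity^vesting is TRUE and equity^vesting_acceleration is "double"',
     "Upon a Change of Control followed by termination, all unvested shares shall vest."),
]

def xray_dump(clauses):
    """Emit every clause once, annotated with its controlling condition."""
    lines = []
    for title, condition, text in clauses:
        lines.append(f"{title} [If {condition}]")
        lines.append(f"    {text}")
    return "\n".join(lines)

print(xray_dump(clauses))
```

The point of the single dump is that a reviewer reads each clause exactly once, with its dependencies visible inline, instead of regenerating the document for every permutation.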

But yes, even this could get ugly to look at, for clauses that have a lot of internal conditions. I can’t figure out any way around that fact.

I hope this helps explain what I have in mind. I am glad that others are weighing in and would be interested in what else they have to say.



Some interesting stuff here.

First, I will say I just recently have come to understand the power of Focus. I use it on almost every clause that I draft now because it is a powerful tool for checking conditional programming.

Second, I was unaware of these other tools and plan to test them out (along with the new features listed above). It would have been helpful to know about these tools previously. I think this might indicate a bigger problem that ClauseBase is facing - communicating features to users. The system is powerful and complex, but many users are unaware of certain features until there is a help thread like this one. Perhaps offering a training course or having a series of training videos about these features would be helpful. I often view your training videos via the help window in a piecemeal way as I work through different issues. Still, it would be helpful if you had either some longer videos or a curriculum that could be more easily reviewed.

Third, I think the tool that Ken is envisioning would be helpful. If you are giving it some thought internally, I would remind you that for such an X-ray to be useful it would have to include both conditional programming and any changes made by Changesets in the Q&A. Perhaps, color-coding would be helpful if you pursued that, so the users could tell which changes were put in place in assemble document mode and which changes were made by the Changesets.

I imagine a feature like that would take a while to develop, so I was curious about practices people may use.

For example, we work with both corporations and LLCs, so our documents need to have different verbiage and some different substance based on which type of entity is involved. When testing I create an answer file as a corporation and generate a document, then I change the answer file to reflect an LLC. Then I can run a comparison to check to see if the language changes are working correctly. I do the same with other substantive areas that we are testing. Of course, this becomes very complicated when you address datafields with many possible inputs.
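That manual comparison workflow can be partly mechanized: export each generated document to text and diff the two runs, so only the entity-specific wording has to be reviewed. A minimal sketch with Python’s standard `difflib` (the document text here is invented for illustration):

```python
import difflib

# Pretend these are the exported texts of two generation runs,
# one per answer file.
corp_doc = [
    "This Agreement is entered into by the Corporation.",
    "The Board of Directors shall approve any amendment.",
]
llc_doc = [
    "This Agreement is entered into by the Company.",
    "The Members shall approve any amendment.",
]

# Print a unified diff showing only the lines that changed between runs.
for line in difflib.unified_diff(corp_doc, llc_doc,
                                 fromfile="corporation.txt",
                                 tofile="llc.txt", lineterm=""):
    print(line)
```

Word’s built-in Compare feature accomplishes the same thing on the exported .docx files, but scripting the diff makes it easier to repeat the check after every template change.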

I keep coming back to the idea that there may be a distinction between tools that the programmers themselves use to create and check their code (“debugging” tools?) and the ways in which a member of the product team or a user-tester might use the actual “product” to test it out.

An end-user (e.g., let’s say a law firm partner) might have no ability to go into Clausebase and use the tools you have set up there in order to examine different combinations and selections of language. The partner simply wants to see and review the output.

The analogy between document automation and programming is interesting. One of the issues I have with software today is the annoyance and time commitment of constant updates and patches. No doubt this is due to the complexity of the code of operating systems and apps. With my automated documents, I really don’t want it to work that way, where we occasionally or maybe more than occasionally find little bugs and errors that we have to correct. My hope for automation has always been that we would invest in careful coding and reasonable QA up front, and come out with clean, error-free documents – at least until the next update that we want to do.

Part of getting to that might be better QA processes and tools but I have taken to heart your message, Maarten, that avoiding unnecessary complexity is another way that we can get there.

Good discussion, and thanks for the input! You are right that more tutorials and more “walkthroughs” would be welcome, we are aware of this.

I really want to stress that we have already experimented with exhaustive output versions, and that it results in really clumsy output.

To be very practical, think about how you would visualize the following clause, in order to make it understandable for the end-user (law firm partner):

  • How would you communicate towards the end-user what special functions such as @fullmonth and @a-or-an mean? What if they are stacked together, such as @fullmonth(@month-of(@today))? What about something like @art-def(#buyer)? @enumerate? I suspect that the average law firm partner will scream when having to review such weird things.
  • In the example above, there are conditions within conditions. Would the entire paragraph have to be exhaustively repeated? But this may get very lengthy. Or perhaps only exhaustively print the various options, with some indentation?

Or have a look at the following condition:

For a ClauseBase author, this is a fairly easy condition. But can you expect the law firm partner to understand what this means? Or that she knows about the different types of options that exist, in addition to “corporation”?

Under Misc > Uncover in Assemble Document, we visualize this as follows, but as you can see these kinds of explanations can easily get lengthy:


Elmer’s comment

“I would remind you that for such an X-ray to be useful it would have to include both conditional programming and any changes made by Changesets in the Q&A”

actually triggered an idea. Let me come back to this.

We just added a toggle that allows you to remove all questions that are not related to the selection at the right side.

This allows you to very quickly see only the questions that can somehow impact the selection. We have been trying it here, and it actually works very well to let non-authors perform tests.

Have a look at this video, in which this new feature is pre-enabled. The video shows a sample Q&A that contains many different questions — as before, all questions are shown when nothing is selected at the right side. However, as soon as you select something at the right side, the questions at the left side get immediately filtered.

It’s not exactly an “X-Ray” (because, as explained above, it remains difficult for a non-author to “peek” into the black box without being confronted with strange-looking coding things), but this comes reasonably close.

Coming this Sunday to all servers!



Is there any way to make this option available for end-users? Whilst I can see the new option in the “Simulate” mode of Design Q&A, I don’t see it in “Fill Out”.

This would be especially useful for Questions that affect more than one thing: clicking the question name only jumps to the first change, not any subsequent ones (as far as I am aware). I have found this to be rather confusing in my user testing.

Selecting “Show Last Changes” in the user “Fill Out” mode deselects itself after a single change is made, which is not that useful.


We have exactly the same idea!

We would like to give it a few weeks, to make sure that the template authors don’t encounter weird things. If so, then we will make this available to the end-users as well (probably making it an option that can be enabled or disabled, depending on the type of document or user).


Update: the feature to only show relevant questions will be available as of Monday.

  • End-users can activate it in a Q&A, but it is not on by default. However, in the Options of a Q&A, you can configure the default:

  • There’s a new button “Changes” available to users to configure the options. All options can be independently enabled/disabled.

  • There’s a new right “Manage how changes are shown” that can be disabled on a per-user basis. (By default, users are allowed to use it.)


Curious if anything more has transpired with regard to this QA conversation. I love a lot of the document review and querying functionality that you’ve been adding lately with the assistance of AI. I’d still love to see AI be used to power some kinds of automated QA.