2023-05-17 Writing specifications with ChatGPT
Last week I wrote a couple of specifications for features of the ChatGPT integration into Composum Pages and tried to use ChatGPT-4 as much as possible during this process. The first one, the content creation dialog, probably took quite a bit more time than writing it completely on my own, since I had to work out a good approach and was documenting that approach at the same time. But the other two features, translation and page categories, certainly went much faster than they would have completely by hand. And it was quite a different experience and a lot of fun, too.
The most interesting thing about that was that it turned into a kind of rapid prototyping with very quick iterations: not by writing a program, but just by describing the general requirements and having ChatGPT continue the specification from that (including acting out what the user would do), even designing a dialog. For each spec I wrote a markdown file (which seems to be pretty much ChatGPT's native format) that started with the three sections "Basic idea", "Basic implementation decisions" and "Out of scope", describing background information for the feature and a rough idea of how it was supposed to work. Then follows an iterative process: again and again I change or add to that file, copy and paste the whole file into ChatGPT, and ask a question or give an instruction to continue with a new section. A nice first question to ask is something like "Name 20 additional things we could extend this feature with." That can already give some nice ideas for the feature that you might want to incorporate into your idea or put into the "Out of scope" section.
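To make that a bit more concrete, here is a minimal, made-up sketch of how such a spec file could start; the section names are the ones described above, but the placeholder content and heading levels are just for illustration:

```markdown
# Specification: <feature name>

## Basic idea
Short description of the feature and the problem it solves, plus any background
ChatGPT needs to understand the context.

## Basic implementation decisions
Rough decisions that constrain the design, e.g. where in the UI the feature lives.

## Out of scope
Things that are deliberately not part of this feature - a good place to park ideas
ChatGPT suggests that you don't want to do right now.

Name 20 additional things we could extend this feature with.
```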
I always got good feedback very quickly by continuing the document with a section "User Workflow" containing the text "To support the dialog design let's consider some typical user workflows. Some likely use cases for the feature are:" ChatGPT answers with some descriptions of how a user might use the new feature. That normally wasn't what I envisioned, so I extended the basic idea, implementation decisions or out of scope sections until ChatGPT's output fit what I had in mind. Then I copied a few of these usage examples that seemed to fit the feature best into the document and edited them somewhat for clarity. This process already fixes some gaps in the original specification. Of course, many of the ideas ChatGPT has would make good later extensions of the feature, so I collected those in an additional list.
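In the spec file that continuation is just another section ending with the quoted sentence, so ChatGPT has an obvious open end to complete - roughly like this (the heading wording is mine):

```markdown
## User Workflow

To support the dialog design let's consider some typical user workflows.
Some likely use cases for the feature are:
```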
In the hope that ChatGPT could generate the whole dialog in the end, I continued by prompting it to make a list of dialog elements, including a description of what each of them would do. Again there was a quick iterative process in which I edited the first three "basics" sections a bit until the element list was almost right. Then I finalized it and let ChatGPT generate a dialog structure in the next section, editing the prompt at the start of that section until it was fine. And then I let ChatGPT make an actual dialog suggestion and repeated that until it seemed fine, too.
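The exact prompt wording changed during those iterations; purely as a hypothetical illustration, the two prompt sections could look something like this:

```markdown
## Dialog elements

Make a list of the elements the dialog needs, with a short description of what
each of them does.

## Dialog structure

Based on the dialog elements above, suggest a structure for the dialog:
which elements are grouped together and in which order they appear.
```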
While ChatGPT is (currently) text based, I found three ways to have it show a dialog you can actually look at. First, you can tell it to draw ASCII art. That's pretty rough but quick, and gives you a good first impression. Second, you can get it to output an SVG that you can save to a file and look at in the browser or some other viewer. That looks much better, but sometimes has some bugs. Last but not least: letting it output an HTML fragment of a dialog works, too. It has the disadvantage that while both ASCII art and SVG can be embedded as pictures into the Markdown document, you cannot really do that with HTML on GitHub - it does support HTML fragments, but annoyingly just swallows many of the important elements like buttons, inputs, textareas and so forth. If you put it into, say, IntelliJ's Markdown editor it works fine, so this might depend on your tools. (Of course, this kind of wireframe isn't quite what you may be used to professionally, but in my case it was enough, since I'm the one implementing it anyway. :-)
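In the spec file itself the SVG variant is the easiest one to keep around, since it can simply be embedded as an image (the file name here is made up):

```markdown
## Dialog draft

The current wireframe as generated by ChatGPT:

![Dialog wireframe](create-dialog-wireframe.svg)
```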
Of course, having ChatGPT do this kind of dry run doesn't beat, say, making a paper prototype and trying it out with a couple of real users. But it's much faster, and, well, sometimes you don't have a real user to ask. :-/
BTW: ChatGPT can also generate various types of diagrams using Mermaid, e.g. user interaction (sequence) diagrams (which are supported on GitHub and in IntelliJ, too). There is now even a plugin that can show you the diagrams directly in ChatGPT.
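As an example, a Mermaid sequence diagram for a (simplified, made-up) translation workflow could look like this; GitHub and IntelliJ render the mermaid fence directly:

```mermaid
sequenceDiagram
    participant Editor as Page editor
    participant Dialog as Translation dialog
    participant ChatGPT
    Editor->>Dialog: opens the translation dialog for a text
    Dialog->>ChatGPT: sends the source text and instructions
    ChatGPT-->>Dialog: returns the translated text
    Dialog-->>Editor: shows the suggestion for review
    Editor->>Dialog: accepts or edits the result
```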
ChatGPT can even generate a couple of quite sensible test cases for you. I'm not sure how this would work out in other contexts than the basically one-man project I'm currently doing, but what I got was more comprehensive than much of the documentation I've seen, and most of it was either generated by ChatGPT or collected from ChatGPT output and adapted. I think a main reason why this process is much faster than writing everything yourself is that reading and collecting / editing the output is much faster than writing. And this process allowed me to think about the feature I'm going to implement from many sides before even starting to implement it, which certainly improves the implementation quality and saves some false starts.
Generating the JSP for the dialog might also be worth a try. I put the specification for the dialog I was working on into a chat, gave it the JSP of another dialog as an example, and asked it to create the JSP for the current dialog. In one case that worked almost perfectly; in the other case I gave up after a while. At least for writing specs I was very happy with what I got, and I really wouldn't want to work without that kind of support in the future.
If you are interested, you can find that general approach and many of the prompts I used described in my Feature creation process. And, of course, you can have a look at the feature specifications I mentioned. If you have time to try something like this, let me know what works for you and what doesn't!