This page describes how I do and don’t use artificial intelligence for my science writing.
The main things you need to know:
- I DO NOT use generative AI to produce text for any of my freelance articles or books.
- I DO use AI-based transcription software to quickly create typed summaries of audio. This saves me hours of (otherwise unpaid) work and helps me to capture all important details from interviews. It also ensures that I get the most accurate quotes from my sources.
- I MAY use generative AI as part of research or to come up with new ideas, but have rarely done so.
Here are some questions you might have:
Which AI-based audio transcription programmes do you use?
I use Otter to transcribe English-language audio and Amberscript for other languages (such as Dutch).
How do you use the transcriptions generated by AI software?
I mainly use the text to help me recall the interview. I also highlight sections that I think could make good quotes, but I always listen back to the audio to make sure that the text is correct before I quote or reference anything in my article.
Do the transcription programmes use the interviews that you upload as training data?
Yes, that’s how the software learns and improves. Here are the data security and privacy policies for Otter and Amberscript, which include information about encryption and my ability to remove content. Both companies are based in jurisdictions with well-defined data privacy regulations (California and the Netherlands/EU, respectively).
What is generative AI?
Generative AI is AI that creates new content, such as text or images. It’s different from traditional AI that, for example, finds patterns in microscope images.
Have you used generative AI for research?
I’ve only used ChatGPT for research once, and I think that case illustrates how I might use it again in the future. I was asked to write an overview piece on the current state of precision medicine. The piece did not require interviews, so I didn’t have an external expert to fall back on. I did the research myself by reading special issues and review articles, then made a list of the innovations I spotted in the literature. Only then did I ask ChatGPT for a list of trends. It had not found anything that I had overlooked, so I didn’t even need to use any of the content it generated. It was just a way to get a second pair of “eyes” on the research.
I’m very aware that ChatGPT and similar generative AI programmes can hallucinate and return completely made-up information, so I would never use a ChatGPT answer to any research question without fact-checking it thoroughly, and I certainly would never copy/paste its text.
Have you used generative AI for any type of content in the past?
I did use ChatGPT once in my own newsletter, in an issue about generative AI, and I disclosed that use in the text.
When AI-generated images were still new, and before I realised that the image generators relied on stolen art, I did create a few images, which I likely shared online at the time.
Will you use generative AI to create content in the future?
I don’t plan to, but if that ever changes I will update this page.
Has anything you’ve written been used to train AI algorithms?
I don’t know for sure, but I would assume that it has. My writing has been online for decades, so I’m sure it was picked up. I’ve asked ChatGPT what it knew about me, and in its early days it was hilariously wrong, but in a way that suggested it had conflated “information about Eva” with “articles written by Eva”. So yes, I think my writing is in the training data. I don’t love that, but on the other hand I would prefer that these models learn from a wide variety of good writing, and for that they need well-edited articles.