I run the script every 6 hours. When a candidate list is built, it has around 15 items plus the prompt, which usually comes to around 1k tokens. When generating an article, the prompt has about 1k tokens and the response is another ~1k tokens (I ask for text with up to 250 words). If the source article is long I just truncate it; that should be enough to create a summary presenting the topic. It would be possible to batch article creation, submitting the prompt once alongside two or more articles, but I don't think it would be worth the hassle. I'm getting billed about 1¢ for every 12-15 articles generated (using gpt-3.5-turbo).
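For anyone who wants to sanity-check the per-article cost, here's a quick back-of-the-envelope sketch. The per-token prices below are placeholder assumptions, not quoted from my bill; check OpenAI's current pricing page, since these change over time:

```python
# Rough cost estimate for one article-generation call.
# NOTE: these per-1K-token prices are assumptions for illustration only --
# look up the current gpt-3.5-turbo pricing before relying on them.
PRICE_IN_PER_1K = 0.0005   # USD per 1K prompt tokens (assumed)
PRICE_OUT_PER_1K = 0.0015  # USD per 1K completion tokens (assumed)

def article_cost(prompt_tokens: int, completion_tokens: int) -> float:
    """Estimated USD cost of a single generation call."""
    return (prompt_tokens / 1000) * PRICE_IN_PER_1K \
         + (completion_tokens / 1000) * PRICE_OUT_PER_1K

# ~1k tokens in, ~1k tokens out, as described above:
print(f"${article_cost(1000, 1000):.4f} per article")
```

Plug in your own observed token counts and the current prices; small changes in how aggressively you truncate the source article shift the prompt side of the estimate directly.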