LLM-assisted requirements analysis and scoping
07 Aug 2025
- Q: How do you eat an elephant?
- A: One bite at a time.
- Q: How do you pick the first bite?
- A: Like this (keep reading).
I consulted with a company where some sysadmins wanted to start automating their overnight and weekend upgrade processes, so that they could reclaim their time and improve reliability.
They’re thinking perhaps Ansible, and they asked me if I had thoughts about how to get started. I said that in my opinion, they could completely ignore learning anything at all about Ansible for a while.
What they hadn’t yet had a chance to do was decide which parts of their upgrade processes they wanted to automate.
As Fagner Brack titled a blog post, “The problem you solve is more important than the code you write.”
I asked them to show me one of the Word documents they use when working long weekend hours to complete a system upgrade, scroll through it, and tell me all about it.
📝✅
Sadly, their document contained too many potentially sensitive details to simply feed it into an LLM chatbot and ask the chatbot for its thoughts. That’s where my mind first jumped, though I knew it’d be a long shot and that the document would probably be precisely that sensitive.
But guess what?
It turns out you don’t need to let an LLM see the exact original instruction manual.
You can get an amazing amount of value without it. Here’s how:
When I said:
“Wow, I wish I’d been recording you telling me about this document…”
It occurred to me that … that’s exactly what you can do.
Step 1: Record yourself talking about the instructions
You can open up a document that’s a bit too sensitive to feed straight into a chatbot, fire up Windows Sound Recorder, and comment on it.
What you’re essentially doing is using your own human brain and mouth as a censorship / privacy filter. You just throw away the recording and start over if you say something you shouldn’t have.
Blather on, stream-of-consciousness style, about:
- what the “point” / the “why” seems to be with respect to:
- the document overall
- a given section
- a given step
- various real-world hassles and frustrations that you’re noticing documented or that reading through the steps reminds you of
- the tangential automation daydreams that re-reading the document inspires
- rants like
- why a certain step is such a pain
- why it’s always this freaking vendor that has that stupid problem
- insecurities about knowledge gaps you’re afraid might make it hard for you to automate the documented process
- context you’d want to give a new hire or a manager, if trying to introduce them to this process
(Note: to see if the idea I’m blogging about in this article actually worked, I had a separate LLM generate an upgrade guide, titled “Overnight Upgrade Procedure: WidgetServer v5.2,” for an imaginary application. The imaginary Word document it made for me to do a recording about was so cute. The LLM even thought to write “For authorized sysadmins only; do not distribute” on line 2. 🥰)
Step 2: Transcribe the recording
Use a tool like the web version of Microsoft Word’s Dictate -> Transcribe button to get a textual copy of the Windows Sound Recorder file you just made.
Step 3: Look in the LLM mirror
Now it’s time to let an LLM make sense of what you just incoherently rambled about.
They’re amazing at this. It’s the “language” part of “large language model.” This is where they shine.
Here’s an example prompt I’m a fan of:
This is a transcript of my ramblings:
“(insert copy of transcript here)”
Can you please:
- Give me a quick synopsis of what I was talking about?
- Show me a mind map of what I was talking about, with labels that are concise? (Maybe not even more than 1-2 words?)
- Suggest other ideas for subsequent prompts to you that seem like they might be helpful for having you help me parse “meaning” out of these ramblings? (FAQ? Study guide? Timeline? Flowchart? Presentation slide deck? Etc.?)
Step 4: Follow up with the LLM for “aha” moments
When I tried this against my “all about WidgetServer 5.2” stream-of-consciousness transcript, here are the “followup prompt” suggestions it made:
- “Can you give me a concise summary of the key points and pain areas from my ramblings?”
- “What are the main steps and their purposes in this upgrade process?”
- “Can you create a mind map or flowchart of the process, using just a few words per step?”
- “What recurring problems or bottlenecks do you notice, and do you have suggestions for addressing them?”
- “Based on this transcript, what are some opportunities for automation or process improvement?”
- “Can you generate a checklist or quick-reference guide from this transcript?”
- “What questions should I ask my team or the vendor to clarify unclear steps or reduce risk?”
- “Can you identify any knowledge gaps or areas where I seem uncertain, and suggest resources or training?”
- “What follow-up prompts could I use to dig deeper into specific frustrations or ideas mentioned here?”
- “Can you help me turn this into a study guide or onboarding FAQ for new hires?”
They were all pretty great, so I asked it:
Can you please do all of those against my-transcript.txt? Let me know if that’s too big of a context window and you need me to break it up.
(You can also see llm-prompt-ideas.txt if you need to be able to more easily look at your suggestions – I copied and pasted them.)
Here was its response:
Here are responses to each of the suggested prompts, based on your streamofconsciousness.md. (No context window issues—your file fits fine!)
1: Concise summary of key points and pain areas
- Key Points: The document is a practical, experience-based walkthrough of the WidgetServer upgrade process. It emphasizes not missing steps, verifying backups, using correct accounts, following the right order for stopping/starting services, and communicating with the team.
- Pain Areas:
- Backups sometimes appear successful but are empty.
- Permissions issues (wrong account, vault access).
- Service restarts are unreliable and order-dependent.
- Manual intervention is often needed (e.g., Task Manager).
- Installer hangs and cryptic warnings.
- Logs are unhelpful for troubleshooting.
- Time pressure and risk of missing documentation or steps.
2: Main steps and their purposes
- Pre-flight: Notify stakeholders, check backups, ensure remote access—prevents surprises and ensures recovery options.
- Backup: Use correct account and credentials to create a reliable backup—critical for rollback.
- Stop Services: Stop in a specific order to avoid hangs and issues—prepares system for upgrade.
- Upgrade: Mount ISO, run installer as admin, handle warnings—applies the new version.
- Post-upgrade: Restart services, run health checks, review logs—verifies upgrade success.
- Wrap-up: Confirm with users, document actions, archive logs—ensures accountability and compliance.
3: Mind map / flowchart (labels only)
- Pre-flight
- Backup
- Stop Services
- Upgrade
- Post-upgrade
- Wrap-up
4: Recurring problems/bottlenecks & suggestions
- Backups unreliable: Automate backup verification (e.g., check file size, test restore).
- Permissions/account confusion: Standardize scripts to accept any admin account or improve error messages.
- Service stop failures: Script service shutdowns with error handling and logging.
- Installer hangs: Document known issues and workarounds; escalate to vendor if persistent.
- Unhelpful logs: Create a troubleshooting guide for common errors.
- Manual steps: Identify steps for automation (e.g., health check, log archiving).
5: Opportunities for automation/process improvement
- Automated backup verification and alerting.
- Scripted service stop/start with built-in retries.
- Automated health check and log summary email.
- Pre-flight checklist script (verifies access, backup, etc.).
- Centralized error code lookup for logs.
6: Checklist / quick-reference guide
- Email stakeholders about outage
- Verify last night’s backup (check file size)
- Confirm RDP/VPN access
- Log in as admin_widget
- Run backup script, save to secure location
- Retrieve root passphrase from vault
- Reboot if backup fails with “Access Denied”
- Stop services in order: WidgetAppCore, WidgetDataSync, WidgetReporting
- Use override token if needed
- Mount upgrade ISO
- Run installer as Administrator
- Enter deployment key from vault
- Wait if installer hangs at 87%
- Start all services
- Check logs for WidgetReporting
- Run health check script
- Update reporting DB schema if needed
- Email health check log to self
- Confirm with test user
- Email team, update ticket
- Archive logs/backups
7: Questions for team/vendor to clarify unclear steps or reduce risk
- How can we reliably verify backup integrity before upgrades?
- Is there a way to make the backup script accept any admin account?
- Can we automate service stop/start and handle failures more gracefully?
- What’s the root cause of the installer hanging at 87%?
- Can the vendor provide clearer error messages or log documentation?
- Is there a recommended way to automate health checks and log archiving?
8: Knowledge gaps/uncertainties & suggested resources/training
- Unsure about best practices for backup verification—seek training or documentation on backup validation.
- Not confident in troubleshooting service stop/start failures—review Windows service management and scripting.
- Unclear on interpreting WidgetReporting logs—request vendor documentation or internal knowledge sharing.
- Unsure about automating manual steps—explore PowerShell scripting or automation tools.
9: Follow-up prompts to dig deeper
- “What are the most error-prone steps in this process, and how can I mitigate them?”
- “Can you suggest a PowerShell script to automate the backup verification?”
- “What are best practices for documenting upgrade procedures for new hires?”
- “How can I create a template for post-upgrade validation?”
- “What metrics should I track to measure upgrade success?”
10: Study guide or onboarding FAQ for new hires
WidgetServer Upgrade FAQ:
- Why do we check backups before every upgrade?
- What account should I use for backup scripts, and why?
- What’s the correct order for stopping services, and what happens if I get it wrong?
- What should I do if the installer hangs or throws warnings?
- How do I interpret health check results and logs?
- Who do I contact if I get stuck or something fails?
- Where do I archive logs and backups after the upgrade?
Let me know if you want any of these expanded or formatted for a specific use case!
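Side note: in case you’re wondering how big a lift that “automate backup verification” suggestion from answers 4 and 5 would actually be, here’s a rough sketch of what it might look like as an Ansible playbook. The inventory group, file path, and size threshold are all invented for illustration – they’re not from the real document, or even from my imaginary WidgetServer one.

```yaml
# Rough sketch only: verify last night's backup exists and isn't suspiciously
# small before anyone starts upgrading. Group name, path, and threshold are invented.
- name: Pre-flight backup verification
  hosts: widgetserver              # hypothetical inventory group
  tasks:
    - name: Look up last night's backup file
      ansible.windows.win_stat:
        path: 'D:\Backups\widgetserver_nightly.bak'   # hypothetical path
      register: backup

    - name: Fail fast if the backup is missing or empty
      ansible.builtin.assert:
        that:
          - backup.stat.exists
          - backup.stat.size > 1048576                # arbitrary 1 MB sanity floor
        fail_msg: "Backup missing or suspiciously small - stop and investigate before upgrading."
```

Even a playbook that tiny would have caught the “backups sometimes appear successful but are empty” pain point from answer 1.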
Step 5: Dive deep with the LLM
That tenth idea – the FAQ – really caught my heart, and I told the LLM so:
The study guide / onboarding FAQ in number 10 has me intrigued.
Let’s say I’ve got Ansible or something like it available to me.
Do you see any inspiration in that study guide about how to think about “automating the boring stuff” with such a tool?
Like, are there some powerful wins there that’re perfect for such a tool?
Or is that study guide perhaps the last place we’d want to look when trying to find pain points to automate, because they’d be the hardest to implement?
(With other parts of what we do being better places to look for first quick wins to reduce pain on upgrade weekends?)
Thoughts?
Note: this prompt is actually a nice example of how I tend to talk to LLMs:
- Name actual or desired emotion nouns/adjectives (“intrigued,” “inspiration,” “powerful,” “win”, “quick”).
- Tip: try consulting a Nonviolent Communication (“NVC”) emotions list.
- Really ham up the vulnerability, like I’m a total beginner and/or the LLM is my therapist (“pain,” “hard”, “perhaps,” “how to think,” “weekend”)
- (“Weekend” is a vulnerable word because the LLM will infer that I’m desperate for work-life balance. Hopefully, my implied desperation will help the LLM look for opportunities that people have historically reported as being truly high-impact.)
- Present it with two potential, opposite answers to my question, softened by uncertainty modifiers like “such a,” “like,” and “thoughts?” that give it room to argue with me that I’m thinking about it all wrong and it’s actually a third answer.
- (Man, oh man, I ask LLMs “Thoughts?” a lot.)
Here was its response:
Great question! The study guide/onboarding FAQ in #10 actually highlights several pain points that are excellent candidates for automation with Ansible or similar tools. Here’s how to think about it:
1: FAQ as Automation Inspiration
The FAQ lists steps that are:
- Repetitive
- Error-prone
- Require specific order or permissions
- Often forgotten or skipped under pressure
These are exactly the kinds of tasks where automation shines.
2: Powerful Wins for Automation
From the FAQ and the process, here are some “boring” but high-value automation targets:
- Backup Verification: Automate checking that the backup file exists, is recent, and is above a minimum size. Ansible can do this with the file and stat modules.
- Account/Permissions Checks: Ensure scripts run as the correct user, or at least fail fast with a clear error if not.
- Service Stop/Start in Order: Use Ansible’s win_service module to stop/start services in the right order, with retries and error handling.
- Installer Execution: Automate mounting ISOs, copying files locally, and running installers as Administrator.
- Health Checks & Log Collection: Run health check scripts, collect logs, and email or archive them automatically.
- Archiving Logs/Backups: Move files to the correct archive location as part of the playbook.
3: FAQ = Automation Roadmap
The FAQ is not the last place to look for automation ideas—in fact, it’s a great starting point!
If a step is important enough to be in the FAQ, it’s probably a pain point or a common source of mistakes.
4: Quick Wins vs. Hard Problems
- Quick wins:
- Automating service management
- Backup verification
- Health check/log archiving
- Pre-flight checks (permissions, connectivity)
- Harder to automate:
- Interpreting ambiguous log errors
- Handling unexpected installer hangs (though you can alert/fail gracefully)
5: Summary
- FAQ = Automation To-Do List: The FAQ is a goldmine for identifying what to automate first.
- Start with repetitive, high-risk, or easily-scripted steps.
- Leave “judgment calls” or complex troubleshooting for later.
In short:
The FAQ is a great place to look for automation inspiration.
Start with the steps that are clear, repeatable, and high-impact—those are your quick wins with Ansible!
Look at that answer. It’s amazing!
I seriously hadn’t thought of that when I saw the “FAQ.” I genuinely was confused.
But it’s totally correct. (Well, that is, all of the many human geniuses that the LLM’s training data plagiarized are correct!)
- Easy: If something from the FAQ is about “error-proneness” or “boring-ness” or “easy to forget if emotions are present,” then yeah, that’s a yes for automation.
- Hard: On the other hand, if something from the FAQ is about “inconsistency,” then heck no, don’t torment yourself trying to automate it.
And then look at its answer number four – it literally split the “FAQ” apart for me into “easy things from the FAQ” and “hard things from the FAQ.”
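To make that “easy” column concrete, here’s roughly what the “stop the services in the right order” quick win could look like with the win_service module the LLM mentioned. The service names are the imaginary WidgetServer ones from my fake document, and the reverse start order is my own assumption – neither document actually specified it.

```yaml
# Rough sketch only: stop the imaginary WidgetServer services in the documented
# order, then start them back up afterward. In a real playbook, the upgrade
# tasks would sit between these two tasks (or in a separate play).
- name: WidgetServer service handling around the upgrade
  hosts: widgetserver              # hypothetical inventory group
  tasks:
    - name: Stop services in the documented order
      ansible.windows.win_service:
        name: "{{ item }}"
        state: stopped
      loop:
        - WidgetAppCore
        - WidgetDataSync
        - WidgetReporting

    - name: Start services back up (reverse order is an assumption, not documented)
      ansible.windows.win_service:
        name: "{{ item }}"
        state: started
      loop:
        - WidgetReporting
        - WidgetDataSync
        - WidgetAppCore
```

The whole “documented order” requirement collapses into a single loop – exactly the kind of boring, order-dependent step the LLM flagged as a quick win.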
Step 6: Profit
Ta da!
Without ever showing the LLM an actual sensitive document, I not only got multiple different inspiring ways of breaking the document down into what was important about it; the LLM also did my first round of requirements analysis for me.
It made amazing suggestions about “what problems to solve:”
- 4 problems to try solving right away
- 2 problems to ignore for now
And that’s how I use LLMs to “design software” without giving any thought to “learning a new programming language.”
Heck, I could even ask the LLM to make me a PowerPoint slide deck if I want to present it all to my team and ask if they have any ideas about which automation idea they’d like to tackle first.
Pretty nifty, eh?
Notes on my prompting style
Here’s what an LLM has to say about the way I prompt, in case you’re curious:
- Conversational, honest, and sometimes vulnerable.
- Clear about pain points, constraints, and emotions.
- Ask for practical, step-by-step help or code.
- Invite the AI to “think with you,” not just code for you.
- (I often say “don’t code; just think” or “don’t code; just help me think” if I’m working in a programming-oriented tool.)
- Focus on security, maintainability, and real-world context.
Here’s the full list.
- You invite the AI to brainstorm, not just generate code.
- You describe your real-world pain point in detail.
- You ask for achievable, incremental steps.
- Your tone is conversational and open-ended.
- You express vulnerability and real concerns.
- You’re honest about your fears, which helps the AI give more relevant, empathetic advice.
- You use emotion to clarify the stakes.
- You ask for best practices and practical examples.
- You specifically request warnings about common pitfalls (you use the term “gotchas”).
- You ask it to focus on security and real-world application.
- You state your technical constraints clearly.
- You use informal language (“oof”) to convey frustration with a situation or with an LLM’s answer, or challenge an LLM’s answer.
- You ask for alternatives that fit your environment.
- You admit overwhelm, which helps the AI tailor its response.
- You’re candid about the complexity and messiness of real systems.
- You invite the AI to help simplify or organize.
- You ask for process design, not just fixes.
- You want to understand the “why” and “how,” not just the “what.”
- You’re looking for cleaner, more maintainable solutions.
- You request practical, focused code for a specific task.
- You want to verify and validate, not just automate blindly.
- You look for ways to automate repetitive manual steps.
- You’re focused on efficiency and reducing friction.
Keep using human project managers and business analysts
Writing this blog doesn’t make me a billionaire “published author” and having been on a podcast or two doesn’t make me a “radio star.”
Similarly, finally having tools that help you feel more comfortable in the realms of requirements analysis and scoping by yourself doesn’t reduce your need to consult a professional business analyst (“BA”) and/or project manager (“PM”).
LLMs can just help you arrive at meetings better prepared for conversations with human PMs and BAs.
- An LLM is just a bunch of zeroes and ones doing math to predict the next likely word in an English-language sentence. Garbage in, garbage out.
- PMs and BAs are the actual professionals. They’ll truly challenge your thinking in important ways. Your employer hired them for a reason. Trust their expertise.