
Generative Q&A flips the table
We're less in control than ever, and knowing that gives us back the agency to influence
This is a long piece. You probably expected that after reading part one. (Feel free to copy/paste this into an LLM and ask for a TL;DR through your lens.)
But here's what makes this different: while the first piece argued for comprehensive publications, this is an exploration of what happens when that content becomes conversational. And I'm not going to sugarcoat it.
The FAQ section on your website (on any website) is a lie.
The answers are right (if overly polished). But the questions aren't frequently asked. They're nice to answer, they tell a story, and they're designed to pull people in.
Real FAQs would look chaotic.
"What's your pricing for a mid-market company with legacy systems?"
"Do you integrate with NetSuite?"
"How long does implementation take if we can only allocate two people part-time?"
Instead, we get:
"What makes your solution different?"
"What industries do you serve?"
"How do I get started?"
These aren't the questions visitors have. They're the questions you want to answer.
What generative Q&A actually means
Generative Q&A eliminates the fiction that we know what people want to ask. And whether you offer it on your website or not, it is already happening in ChatGPT. Everyone you talk to has looked up your company, and you personally, in ChatGPT. And they aren't following your 'FAQ' sequence.
When people visit your website through an LLM-powered search interface, they're not navigating your information architecture. They're asking their actual question, whatever it is, and the system pulls from everything you (and others) have published to construct answers.
The visitor who asks about your approach to change management gets a different response than the visitor who asks about your technical architecture. Same knowledge base, different lenses, different answers.
Traditional search optimized for keyword matching. Information architecture tried to predict paths through content. Both assumed the organization controlled how information got accessed.
Generative Q&A assumes the questioner controls the lens through which information gets viewed, and you still get to influence the outcome they walk away with.
PR skills (influencing the influencer) are coming in handy.
The burden we've been placing on visitors
Think about what we currently ask people to do on websites.
We ask them to understand our mental model. Our organizational structure. Our product categorization. Our service taxonomy. We build navigation based on how we think about our business, then expect visitors to translate their questions into our framework.
"I need help with executive communications in a post-merger situation" becomes "Let me check Services... Communications... maybe Strategic Communications? Or is it under Advisory?"
Our logical breadcrumb paths and descriptive labels are a workaround for a fundamental limitation: we couldn't let people just ask their actual question. And we're so used to it now.
Now we can. And they will.
What happens to information architecture
This transforms what information architecture means. Rather than architecting paths for navigation, you're architecting completeness for querying.
The question shifts from "How do I organize information so people can find what they need?" to "How do I structure information so AI can extract what different people need?"
Traditional IA focused on hierarchy, categorization, and wayfinding. Generative IA focuses on coverage, relationships, and context richness.
Some pages on your site might never appear in your main navigation but become incredibly valuable in query results. They're not destinations you route people to, but context that helps construct better answers.
Let's get philosophical for a sec
This shift runs deeper than interface design.
We used to present information assuming the author knows best.
"Let me show you the five things you need to understand."
The author chooses sequence, emphasis, scope.
Conversational information assumes the questioner knows their context best.
"Let me ask about the three things relevant to my situation."
The reader chooses relevance, depth, direction. The knowledge base adapts to the reader's frame.
This isn't better or worse. It's a fundamentally different epistemology. The table is flipped.
Presentational:
"Here's what I think you should know, in the order that makes sense to me."
Conversational:
"Here's everything I know, and you determine what matters and in what order."
It changes who has agency in the knowledge exchange.
What this means for how we publish
Most website content is written presentationally. We craft pages for sequential reading. We structure content with clear beginnings, middles, ends.
Conversational content needs different architecture.
Ideally you're building a knowledge graph that AI can traverse associatively. The reader might enter through any question, extract any insight, follow any connection.
Your job isn't to control the narrative. Your job is to make sure complete thinking exists so readers (and their LLMs) can construct their own.
This is what I mean by high-resolution information in practice. It's structured for associative retrieval rather than linear reading.
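To make that concrete, here's a minimal sketch of what "chunks with explicit relationships" could look like in practice. The data model and the small traversal helper below are illustrative assumptions, not a standard or a product feature:

```python
from dataclasses import dataclass, field

@dataclass
class ContentChunk:
    """One self-contained piece of thinking, not a page."""
    id: str
    topic: str          # what the chunk is about
    audience: str       # who it is written for
    text: str
    related: list[str] = field(default_factory=list)  # ids of connected chunks

# A tiny knowledge graph: chunks plus explicit relationships between them.
chunks = {
    c.id: c for c in [
        ContentChunk("pricing-mid-market", "pricing", "buyer",
                     "Mid-market pricing depends on legacy-system complexity...",
                     related=["implementation-timeline"]),
        ContentChunk("implementation-timeline", "implementation", "operations",
                     "With two part-time people, a typical rollout takes...",
                     related=["pricing-mid-market", "change-management"]),
        ContentChunk("change-management", "change management", "executive",
                     "Post-merger communication works best when...",
                     related=[]),
    ]
}

def neighborhood(start_id: str, depth: int = 1) -> list[ContentChunk]:
    """Follow 'related' links so an answer can pull in adjacent context."""
    seen, frontier = {start_id}, [start_id]
    for _ in range(depth):
        frontier = [r for cid in frontier
                    for r in chunks[cid].related if r not in seen]
        seen.update(frontier)
    return [chunks[cid] for cid in seen]

print([c.id for c in neighborhood("pricing-mid-market")])
```

The point isn't the code. It's that relationships are stored explicitly, so a retrieval layer can pull adjacent context together instead of serving a single page.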
Where this shows up beyond websites
The same principle applies anywhere people seek information from organizational knowledge.
Internal documentation. Process guides. Training materials. Strategic frameworks.
We've been organizing all of this presentationally. Here's the handbook. Here's the process map. Follow these steps in this order.
But people don't have generic questions. They have specific ones.
"How do I handle this edge case?"
"What's the precedent for this scenario?"
Generative Q&A against comprehensive internal knowledge lets people ask their actual question instead of navigating someone's idealized learning path.
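The shape of that loop is simple: retrieve the most relevant pieces of internal knowledge, then let an LLM answer from them. Here's a deliberately crude sketch, with relevance reduced to word overlap and the LLM call itself omitted; a real system would use embeddings and hand the resulting prompt to a model:

```python
def score(question: str, doc: str) -> float:
    """Crude relevance: word overlap. Real systems use embeddings."""
    q_words = set(question.lower().split())
    d_words = set(doc.lower().split())
    return len(q_words & d_words) / (len(q_words) or 1)

def build_prompt(question: str, documents: list[str], top_k: int = 3) -> str:
    """Pick the most relevant internal docs and wrap them in a prompt
    an LLM would answer from (the model call is left out here)."""
    ranked = sorted(documents, key=lambda d: score(question, d), reverse=True)
    context = "\n\n".join(ranked[:top_k])
    return (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )

handbook = [
    "Edge cases in expense approval go to the regional finance lead.",
    "Our change-management playbook assumes a named internal champion.",
    "Data privacy questions for EU clients route through the DPO.",
]
print(build_prompt("How do I handle this edge case in expense approval?", handbook))
```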
The analytics revolution hiding in plain sight
Here's something wild that seems obvious but isn't: when people ask questions of your content through generative interfaces, you finally discover what they actually want to know.
Traditional web analytics tell you what pages people visited. Search analytics tell you what keywords they used. Both are proxies for understanding intent.
Generative Q&A logs tell you the actual questions people ask.
"What's your approach to integrating with legacy systems when APIs aren't available?"
"How do you handle data privacy for European healthcare clients?"
"What happens if our internal champion leaves mid-project?"
These aren't search terms. They're real questions revealing real concerns. This is data we've never had before.
My theory: Organizations that implement generative Q&A on their websites and analyze the questions will understand their market better than competitors who still rely on form submissions and sales calls to discover what prospects care about.
There is zero reason not to: 'controlling the FAQ' is gone anyway. People ask these questions either on your website or in ChatGPT.
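A toy sketch of what mining that question log could look like. The log contents are invented for the example, and a real pipeline would cluster question embeddings rather than count keywords:

```python
from collections import Counter

# Hypothetical question log: in practice, every query to your generative
# Q&A interface would be stored with a timestamp and anonymized session id.
question_log = [
    "What's your approach to integrating with legacy systems when APIs aren't available?",
    "How do you handle data privacy for European healthcare clients?",
    "What happens if our internal champion leaves mid-project?",
    "Do you integrate with NetSuite?",
    "How do you handle data privacy for financial services clients?",
]

STOPWORDS = {"what", "what's", "how", "do", "you", "your", "for", "with",
             "the", "if", "our", "to", "when", "is", "are", "a", "an", "of"}

def themes(questions: list[str], top_n: int = 5) -> list[tuple[str, int]]:
    """Very rough theme extraction: count recurring non-stopword terms.
    A real pipeline would cluster question embeddings instead."""
    words = Counter()
    for q in questions:
        words.update(w.strip("?.,'").lower() for w in q.split())
    for sw in STOPWORDS:
        words.pop(sw, None)
    return words.most_common(top_n)

print(themes(question_log))
```

Even this crude version surfaces the themes ("data privacy", "integrate", "clients") that never show up in page-view analytics.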
The expertise shift this requires
Building for generative Q&A requires different expertise than building traditional websites.
Traditional web development is about design, user experience, content strategy, technical implementation.
Generative systems need ontology design, semantic structure, context mapping, knowledge completeness. You need people who understand how questions relate to answers across multiple dimensions.
The bottleneck moved. When humans navigate websites, interface design is crucial. When AI retrieves information on behalf of humans, knowledge structure is crucial.
We're shifting from interface builders to knowledge architects.
What "conversational information" really enables
Discovery becomes personal. Ten people asking questions about your approach will get ten different entry points based on what matters to them specifically. You're not forcing everyone through the same front door.
Depth becomes navigable. You can publish comprehensive thinking without overwhelming anyone because they only encounter the depth they choose to explore. The executive gets the strategic view. The implementer gets the tactical details. Same source, different extraction.
Context becomes combinatorial. Questions can span categories you never connected in your navigation. "Your approach to change management in highly regulated industries with distributed teams" pulls from three different conceptual areas simultaneously. Navigation can't do that. Associative retrieval can.
This brings us back to the core insight from part one: different stakeholders need different things. Long form gives them the complete picture. Generative Q&A gives them the right lens. You're not choosing between comprehensive and accessible anymore—you're providing both simultaneously.
The timing question
The technology came first, and organizational readiness is just catching up.
Two years ago, putting a "chat with our website" function on your homepage felt gimmicky. It felt like those terrible rule-based chatbots everyone hated. Try to do this without GraphRAG. (I tried. It sucked.)
Now everyone's using ChatGPT and Claude daily. The interface pattern is getting familiar. The interaction model makes sense.
The enabling technology arrived in 2022. The cultural comfort is arriving in 2025.
Organizations that implement generative Q&A now are early, not experimental.
Where I'm seeing this go
Beyond customer-facing websites, this transforms how teams interact with organizational knowledge.
Imagine asking your company's strategy documents: "What's our position on AI integration in client deliverables?" and getting an answer synthesized from multiple strategy memos, meeting notes, and internal discussions.
Or querying your project documentation: "What tasks are blocking the Chicago office move and when's our next meeting about it?"—pulling current status from multiple systems in real time.
Your proposal library becomes searchable by intent: "Show me how we've positioned data governance for financial services clients in the past 18 months" surfaces the relevant sections from six different proposals instantly.
This is what happens when you build for conversational access to comprehensive information. And it's already possible.
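A rough sketch of what the proposal-library query could look like against tagged sections. The fields and tags are assumptions for illustration, and the topic match would be semantic in a real system rather than an exact string comparison:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ProposalSection:
    client_industry: str
    topic: str
    written: date
    text: str

library = [
    ProposalSection("financial services", "data governance", date(2024, 11, 2),
                    "We position data governance as a board-level risk control..."),
    ProposalSection("healthcare", "data governance", date(2024, 6, 15),
                    "For healthcare clients, governance starts with consent flows..."),
    ProposalSection("financial services", "change management", date(2025, 1, 20),
                    "Post-merger communication plans for banking clients..."),
]

def search(sections: list[ProposalSection], industry: str, topic: str,
           since: date) -> list[ProposalSection]:
    """Filter by structured metadata; the intent ('how we've positioned X
    for Y clients') maps onto tags instead of folder names."""
    return [s for s in sections
            if s.client_industry == industry
            and s.topic == topic
            and s.written >= since]

for section in search(library, "financial services", "data governance", date(2024, 1, 1)):
    print(section.written, section.text[:60])
```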
So why long form
Generative Q&A only works if complete thinking exists to query. If you've compressed everything down to marketing bullets and summary pages, there's nothing for AI to extract when someone asks a specific question.
The organizations creating high-resolution information now are building the knowledge base that makes conversational access valuable later.
You can't have meaningful Q&A against sparse information. The answers will be generic, obvious, unhelpful. Long form creates the corpus. Generative Q&A makes it accessible.
These are two sides of the same shift.
A warning about what this isn't
You're not replacing marketing with a search bar. Or the search bar with an LLM conversation window.
What changes is that you can embed everything you need to share in comprehensive content that different stakeholders extract value from in different ways, rather than forcing everyone through the same compressed message.
Most websites still assume presentational control. Their content is still optimized for navigation. Their information architecture predicts paths instead of optimizing for LLM queries.
Which creates a window for the organizations that move first.
When your prospect's AI assistant can have a substantive conversation with your knowledge base while your competitor's AI assistant hits dead ends in thin content, you win. When your team can query organizational knowledge conversationally while other teams navigate documents manually, you move faster.
My prediction: The past 20 years optimized for presentational efficiency. The next 20 will reward conversational accessibility. Your move.



