Discussion about this post

Neural Foundry:

The point about ChatGPT and Perplexity citing different sources for the same brand is fascinating; it really shows how you can't just optimize for one model. Your SEED framework makes sense, but I'm wondering about the practical challenge of maintaining that level of consistency across so many publications. For smaller brands with limited budgets, is there a minimum threshold of coverage that starts to teach LLMs effectively, or does it really require the kind of volume your examples show?
