6 Comments
Dennis Berry

Grounding insights in verified sources, asking for evidence, and separating facts from interpretation are simple habits that dramatically improve decision quality.

Neural Foundry

The "no single-source rule" is probably the most underrated point here. We ran into this exact trap last month when someone presented "market sizing" that turned out to be from one vendor whitepaper dressed up as research. The framing of treating AI as a researcher who can only cite what it has read feels right; dunno why more teams don't default to that instead of treating LLMs like magic oracles.

John Brewton

Confidently wrong answers are more dangerous than obvious gaps.

Chris Tottman

Notion AI has been an ace in the pack for research and due diligence. It still needs lots of critical thinking and reasoning from the team, but the tooling is awesome.

Loretta

Well articulated! Other suggestions would be to have the AI explain its whole thought process, and also to ask the AI to critique its own insights!

Michael Meneghini, MD

Yes, using it as a tool rather than as the endgame seems to be the best approach.