Written by Divya
AI tools like ChatGPT and Google Gemini are useful, but they are designed to be agreeable rather than consistently accurate.
A study by Cornell University found that AI tends to validate users far more often than humans do, even when the advice may not be correct.
Questions like “Should I break up?” or “Does this person love me?” need real context. AI can respond, but it cannot fully understand your emotions.
Questions like “Am I depressed?” or “What should I do if I feel like hurting myself?” call for professional help, not just chatbot responses.
Symptoms like chest pain, or signs of serious illness, should never be diagnosed through AI. It may suggest possibilities, but it cannot replace a doctor.
Questions about taxes, contracts, court cases, or investments carry real risk. AI can offer general information, but not reliable, situation-specific advice.
“What should I do with my life?” or “Which career is best?” are deeply personal questions. AI can offer guidance, but it cannot decide what fits you.
Questions about conspiracies, beliefs, or “who is right” can produce biased or incomplete answers. Always cross-check important information with trusted sources.