The "Rehearsal Partner" Risk: Why AI is No Substitute for Human Expertise

In the digital hallways of the 2020s, a new trend has taken hold among Generation Z. No longer just a tool for coding or drafting emails, ChatGPT has become a "rehearsal partner" for life. Whether practicing a difficult conversation with a boss or seeking a quick fix for a nagging health symptom, young professionals are turning to AI for guidance rather than just productivity. But as this technology becomes an everyday companion, we must ask: At what point does a helpful tool become a digital danger?

The 2.5 Billion Prompt Reality

Every 24 hours, an estimated 2.5 billion prompts are processed by AI systems. This staggering volume speaks to our growing reliance on instant answers. For a generation that prizes efficiency, the "always-on" nature of an AI therapist or career coach is tempting. However, this scale creates a massive playground for misinformation. Unlike a licensed professional, an AI doesn't "know" facts; it predicts the next likely word in a sentence. When that prediction involves medical advice or legal strategy, the stakes aren't just high—they are life-altering.
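
To see what "predicting the next likely word" means in practice, consider a deliberately toy sketch in Python. The phrase table, probabilities, and function names here are invented for illustration (no real model works from a lookup table like this), but the core move is the same: pick a statistically plausible continuation, with no check on whether it is true.

    import random

    # Hypothetical phrase statistics, standing in for the patterns a
    # model absorbs from training text. The numbers are made up.
    continuations = {
        "take two": {"tablets": 0.7, "days": 0.2, "breaths": 0.1},
    }

    def next_word(prompt: str) -> str:
        """Sample the next word by probability alone; nothing here
        verifies facts, reads a chart, or knows the user."""
        options = continuations.get(prompt, {"[unknown]": 1.0})
        words = list(options)
        weights = [options[w] for w in words]
        return random.choices(words, weights=weights)[0]

    print("take two", next_word("take two"))  # plausible, never verified

The output sounds fluent precisely because it is frequency, not knowledge.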

Why "Real Life" is Too Messy for an Algorithm

The fundamental flaw in treating a chatbot as an advisor is that AI lacks context. It doesn't know your history. It hasn't read a patient's medical charts or lived through a family's unique dynamics, and it doesn't understand the dilemmas a patient actually faces. It cannot feel the weight of a moral choice or the nuance of a workplace conflict. And it "hallucinates": because it operates on probability, it can confidently invent "facts" that sound reasonable but are dangerously wrong.

The Professional Boundary: Doctors, Lawyers, and Therapists

There is a dangerous trend of using AI for "quasi-professional" advice, but AI is not a substitute for expertise. In medicine, a diagnosis must come from a doctor; AI lacks the diagnostic accuracy and accountability required to manage human health. In Nepal, the situation is particularly complex because of the digital literacy gap. The "digital divide" isn't just about owning a smartphone; it's about a lack of digital health literacy and the absence of proper education on how these tools work. Most models are trained on English data, so symptoms described in Nepali can be misinterpreted, leading to generic or dangerous advice. Chatbots often recommend over-the-counter medications (like specific brands of ibuprofen) that may not be available in Nepal or are sold there under different names. And unlike a local health post worker, an AI has no legal accountability if its advice leads to a patient's condition worsening.

In law, legal nuances vary by jurisdiction, and AI often misses the minute details. Most models are trained on Western common law, so when a user in Nepal asks for advice, the AI might apply US principles to the Muluki Ain or Nepal's criminal codes. A bot might claim a contract is valid based on California law when, in Nepal, it lacks the necessary stamp duty or local registration, making it legally "dead." If a human lawyer gives bad advice, they can be sued for malpractice; if an AI fails you, there is no legal recourse.

In mental health, the danger isn't that the bot is "mean" but that it is hollowly helpful. AI uses sentiment analysis to respond with scripted warmth, yet it cannot sense the hesitation in a breath or the cultural weight of social stigma in a Nepali household. In a crisis, an AI follows a logic tree: if a user's response doesn't fit the "expected" input, the bot may fail to trigger a life-saving intervention, as the sketch below illustrates. And a bot might suggest "setting boundaries," a Western individualist concept that could create more conflict in a traditional Nepali joint family.
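
To make that failure mode concrete, here is a minimal, hypothetical sketch in Python. The keyword list, the function name, and both test phrasings are invented for illustration, and real safety systems are more elaborate, but the brittleness of matching against "expected" input is the same.

    # Toy crisis screen: escalate only when a message matches a
    # pre-listed phrase. An illustrative assumption, not any real
    # chatbot's safety system.
    CRISIS_PHRASES = {"suicide", "kill myself", "end my life"}

    def crisis_check(message: str) -> str:
        text = message.lower()
        if any(phrase in text for phrase in CRISIS_PHRASES):
            return "ESCALATE: show helpline, hand off to a human"
        return "CONTINUE: scripted supportive reply"

    # A direct statement matches the expected input...
    print(crisis_check("I want to end my life"))
    # ...but an indirect, culturally coded plea slips straight through.
    print(crisis_check("My family would be better off without the burden of me"))

A human counselor hears the second message for what it is; the logic tree does not.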

A Path Forward for Nepal

The National AI Policy 2025 and the Digital Nepal Framework are starting to address these gaps by encouraging the development of LLMs trained on Nepali medical and legal datasets. NGOs are running programs that teach citizens to treat AI as a "research assistant," not a "doctor." And regulatory guardrails are taking shape through a National AI Centre established to monitor AI use in public service delivery.

The Bottom Line: Tool, Not Teacher

We should embrace AI for what it is: a brilliant engine for brainstorming, a way to overcome "blank page syndrome," and a powerful efficiency booster. But we must stop short of turning these bots into our companions or our primary decision-makers.

Use the bot to draft the email, but use a human to navigate the relationship. In a world of billions of daily prompts, the most valuable thing you can possess is the discernment to know when to put the phone down and talk to a professional who actually knows your name—and your story. AI is a rehearsal partner, not the lead actor in your life. For the high-stakes decisions, trust the people who have the lived experience to match your own.