Chandrasekhar said that the advisory is aimed at large and "significant platforms" and won't be applied to startups.
Former IT minister Rajeev Chandrasekhar has weighed in on why AI models like ChatGPT and Google Gemini sometimes give strange answers. In a post on X (formerly Twitter) on June 17, 2024, he explained that these models are trained on large volumes of data that are not always of good quality, and that this poor-quality data can lead to incorrect output. He wrote: 'LLMs (Large Language Models) 'bullshit content' comes from most models being trained on content/datasets that are — to politely use the phrase — NOT quality assured. That's why you have the embarrassing sight of billion-dollar Gemini/ChatGPT on many occasions spewing nonsense.'
Chandrasekhar invoked a well-known saying in computing: 'Garbage in, garbage out.' If the data fed into a system is bad, its output will be bad too. AI chatbots are trained largely on data scraped from the internet, so when that data contains bad information, the answers they produce reflect it.
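The 'garbage in, garbage out' point can be sketched with a toy word predictor. The model and training text below are invented for illustration: a wrong statement appears in the training data more often than the correct one, so the model confidently repeats the error, much like a chatbot trained on unvetted web text.

```python
from collections import defaultdict

# Toy trigram "language model" (a hypothetical stand-in for an LLM)
# trained on unvetted text. The wrong fact appears twice, the correct
# fact only once, so the model learns to repeat the error.
corpus = (
    "the sun rises in the west . "  # wrong on purpose, appears twice
    "the sun rises in the west . "
    "the sun rises in the east . "  # correct, appears only once
)

tokens = corpus.split()

# Count which word most often follows each two-word context.
counts = defaultdict(lambda: defaultdict(int))
for a, b, c in zip(tokens, tokens[1:], tokens[2:]):
    counts[(a, b)][c] += 1

def predict(context):
    """Return the most frequent continuation seen during training."""
    following = counts[context]
    return max(following, key=following.get)

# Because the bad data outnumbers the good data, the model
# confidently gives the wrong answer:
print(predict(("in", "the")))  # -> west
```

Real LLMs are vastly more sophisticated, but the underlying dynamic is the same: a model can only be as reliable as the data it learns from.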
AI chatbots like ChatGPT and Google Gemini sometimes give wrong or confusing answers, a problem known as AI hallucination. These chatbots can say things that make no sense, are misleading, or are even offensive. This happens because of issues such as poor-quality training data and the sheer complexity of the models.