'LLMs bu****it content comes from...': Ex-IT Minister explains why AI models like ChatGPT, Google Gemini give strange answers
In a detailed post on X, former IT minister Rajeev Chandrasekhar explained that AI models like ChatGPT and Google Gemini sometimes provide incorrect answers due to poor-quality training data. He highlighted the need for better data to improve AI accuracy.

- Jun 17, 2024
- Updated Jun 17, 2024, 6:37 PM IST
Former IT minister Rajeev Chandrasekhar has explained why AI models like ChatGPT and Google Gemini sometimes give strange answers. In a post on X (formerly Twitter), he said these models are trained on vast amounts of data that are not always of good quality, and that poor-quality training data leads to incorrect output.

He wrote: 'LLMs (Large Language Models) "bullshit content" comes from most models being trained on content/datasets that are — to politely use the phrase — NOT quality assured. That's why you have the embarrassing sight of billion-dollar Gemini/ChatGPT on many occasions spewing nonsense.'

Chandrasekhar invoked a well-known saying in computing: 'Garbage in, garbage out.' If the input data is bad, the output will be bad too. AI chatbots are often trained on data scraped from the internet, so if that data contains bad information, they produce bad answers.

When chatbots like ChatGPT and Google Gemini give wrong or confusing answers, the problem is known as AI hallucination. These chatbots can say things that make no sense, are misleading, or are even offensive, typically because of issues such as poor training data and overly complex models.
For Unparalleled coverage of India's Businesses and Economy – Subscribe to Business Today Magazine
