Elon Musk’s startup xAI has imposed restrictions on the image-generation feature of its Grok chatbot on X after the tool faced backlash for creating sexualized images. Previously, users could instruct Grok to edit photos of people, including removing clothing and placing them in sexualized poses—often without consent—and the chatbot would publish these images in replies on the platform.
As of Friday, Grok informed users that image generation and editing are now limited to paying subscribers. This appears to have stopped the bot from automatically creating and posting such images in response to user comments, though users can still generate sexualized images through the Grok tab on X or the standalone Grok app without a subscription and post them manually.
When asked for comment, xAI responded to Reuters with an automated message saying, "Legacy Media Lies," while X did not immediately reply. Musk previously stated that anyone using Grok to create illegal content would face the same consequences as uploading such material directly. A Reuters reporter tested Grok by asking it to edit a photo of a person so that they appeared in a bikini; the bot declined, noting that the feature is now subscription-only.
The European Commission criticized the restrictions, saying limiting access to paying subscribers does not address the core issue of unlawful sexualized images. Other governments and regulators have also condemned the content and opened inquiries, pressuring the platform to prevent and remove illegal material. German media minister Wolfram Weimer described the flood of semi-nude images as the “industrialization of sexual harassment.”
Disclaimer: This image is taken from Reuters.

The Indian government has deemed social media platform X's reply inadequate in addressing concerns over its Grok AI chatbot producing obscene and sexually explicit material, particularly content targeting women and children. The Ministry of Electronics and Information Technology (MeitY) issued the sharp rebuke following X's submission of details on its content moderation policies.
The issue surfaced on January 2, 2026, when MeitY sent a formal notice to X (formerly Twitter) after reports emerged of users exploiting Grok to generate vulgar images via fake accounts. X responded by outlining its takedown processes and adherence to Indian IT Rules, 2021, which mandate prompt removal of illegal content. Officials, however, found the explanation lacking depth on preventive measures and specific actions taken.
Developed by Elon Musk's xAI and integrated into X, Grok relies on massive datasets scraped from the web, making it prone to replicating harmful biases without strong guardrails. Users have circumvented its existing filters, such as prompt blocks and AI classifiers, through simple workarounds, leading to non-consensual deepfakes and even sexualized depictions of minors, including references to characters like those from "Stranger Things."
The backlash extends beyond India, with regulators in the UK, Malaysia, and France launching probes into Grok over child exploitation risks and privacy violations. Critics highlight X's failure to implement robust "nudification" restrictions and argue that AI-generated abusive imagery should carry the same platform penalties as directly uploaded material.
MeitY now demands comprehensive clarifications from X, potentially including enhanced human moderation, region-specific filters, and proactive AI training. Failure to comply could invite fines or operational curbs, signaling to global tech giants the rising cost of lax content controls in the AI era.
Disclaimer: This image is taken from NDTV.

At CES 2026 in Las Vegas, Nvidia kicked off the event with a series of major announcements, unveiling new AI hardware platforms, open AI models, and expanded initiatives in autonomous driving, robotics, and personal computing. CEO Jensen Huang confirmed that the company’s next-generation Rubin AI platform is now in production and outlined plans to scale AI across consumer devices, vehicles, and industrial systems throughout the coming year.
One of the most significant reveals was Rubin, Nvidia’s next-generation AI computing platform and the successor to its Blackwell architecture. Rubin is Nvidia’s first “extreme co-designed” platform, meaning that its chips, networking, and software are developed together as a single system rather than separately. The platform, now in full production, is designed to dramatically reduce the cost of generating AI outputs compared to previous systems. By combining GPUs, CPUs, networking, and data-processing hardware, Rubin can efficiently handle large AI models and complex workloads.
Nvidia introduced a new AI-focused storage system aimed at improving the performance of large language models by letting them manage long conversations and extensive context windows more efficiently, enabling faster responses while using less power.
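As a rough illustration of why handling long context efficiently matters, the sketch below compares the cost of re-encoding an entire conversation on every turn against reusing a stored key/value cache. It is a generic back-of-envelope model with assumed turn and token counts, not a description of Nvidia's announced system:

```python
# Illustrative only: why reusing previously processed context speeds up
# multi-turn LLM inference. Generic cost model, not Nvidia's system.

def attention_cost_no_cache(turn_lengths):
    """Re-encode the full conversation from scratch every turn.
    Self-attention over n tokens costs on the order of n^2, so each
    turn pays again for the entire history."""
    total, history = 0, 0
    for new_tokens in turn_lengths:
        history += new_tokens
        total += history ** 2              # reprocess everything seen so far
    return total

def attention_cost_with_cache(turn_lengths):
    """Keep the key/value cache for earlier turns and only attend the
    new tokens against the stored history."""
    total, history = 0, 0
    for new_tokens in turn_lengths:
        total += new_tokens * (history + new_tokens)
        history += new_tokens
    return total

turns = [200] * 50                          # assumed: 50-turn chat, 200 tokens per turn
print(attention_cost_no_cache(turns))       # ~1.7e9 unit operations
print(attention_cost_with_cache(turns))     # ~5.1e7 unit operations (~34x fewer)
```

The quadratic term is the point: without reuse, every new turn pays again for everything said so far, which is the overhead a context-aware storage layer is meant to avoid.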
Nvidia also showcased its growing portfolio of open AI models, trained on its supercomputers and available for developers and organizations to build upon. These models are organized by application, spanning healthcare, climate research, robotics, reasoning-based AI, and autonomous driving, providing ready-to-use foundations that can be customized and deployed without starting from scratch. The goal is to speed up the arrival of AI features in apps, vehicles, and devices by letting developers build on existing models rather than train new ones from the ground up.
A major focus of Nvidia’s presentation was what it calls physical AI, where AI systems interact with the real world through robots, machines, and vehicles. Nvidia demonstrated how robots and machines are trained in simulated environments before deployment in real-world scenarios. These simulations allow the testing of edge cases, safety protocols, and complex movements that would be challenging or unsafe to recreate physically. At the heart of this effort is Nvidia’s new Cosmos foundation model, trained on videos, robotics data, and simulations. The model can generate realistic videos from a single image, synthesize multi-camera driving scenarios, model edge-case environments from prompts, perform physical reasoning and trajectory prediction, and drive interactive, closed-loop simulations.
Nvidia introduced Alpamayo, a new AI model portfolio specifically designed for autonomous driving. It includes Alpamayo R1, the first open reasoning VLA (vision language action) model for autonomous vehicles, and AlpaSim, a fully open simulation blueprint for high-fidelity autonomous vehicle testing. These models process camera and sensor data, reason about driving scenarios, and determine appropriate vehicle actions, allowing autonomous vehicles to handle complex situations, such as navigating busy intersections without prior experience. Nvidia confirmed that Alpamayo will be integrated into its existing autonomous vehicle software stack, with the first implementation set to appear in the upcoming Mercedes-Benz CLA.
Disclaimer: This image is taken from Nvidia.

Nvidia is rushing to meet strong demand for its H200 AI chips from Chinese tech companies and has approached Taiwan Semiconductor Manufacturing Co (TSMC) to increase production, according to sources. Chinese firms have reportedly placed orders exceeding 2 million H200 chips for 2026, while Nvidia currently has just 700,000 units in stock. The precise number of additional chips Nvidia plans to order from TSMC is unclear, though production is expected to begin in the second quarter of 2026.
This surge in demand raises concerns about potential tightening in global AI chip supplies, as Nvidia must balance Chinese orders with limited availability elsewhere. Risks also remain because Beijing has not yet approved H200 shipments, though U.S. export restrictions were recently eased. Nvidia has set prices for Chinese clients at roughly $27,000 per chip, with variations depending on volume and arrangements.
The H200, part of Nvidia’s older Hopper architecture and built on TSMC’s 4-nanometer process, includes both standalone H200 chips and GH200 superchips combining the Grace CPU with Hopper GPUs. Initial orders are expected to be fulfilled from existing stock, with deliveries planned before the Lunar New Year.
Most of the Chinese demand comes from major internet companies seeking a significant performance boost over previous chips. Eight-chip modules are priced at around 1.5 million yuan, offering better value than the discontinued H20 module and grey-market alternatives. For example, ByteDance could spend roughly 100 billion yuan on Nvidia chips in 2026 if sales are approved. Regulatory uncertainty remains, as Chinese authorities weigh allowing H200 imports while promoting domestic AI chip development. One potential condition under consideration would require bundling H200 purchases with a portion of domestically produced chips.
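The reported figures can be sanity-checked with simple arithmetic. The sketch below does so under an assumed exchange rate of about 7.1 yuan per US dollar, which is not stated in the article:

```python
# Rough consistency check on the reported H200 figures (illustrative only).

YUAN_PER_USD = 7.1                       # assumed exchange rate, not from the article

module_price_yuan = 1_500_000            # reported price of an eight-chip module
chips_per_module = 8

per_chip_yuan = module_price_yuan / chips_per_module
per_chip_usd = per_chip_yuan / YUAN_PER_USD
print(f"Implied per-chip price: ~{per_chip_yuan:,.0f} yuan (~${per_chip_usd:,.0f})")
# ~187,500 yuan, roughly $26,000, close to the quoted ~$27,000 per chip.

orders, in_stock = 2_000_000, 700_000    # reported 2026 orders vs. current inventory
print(f"Gap to be covered by new production: {orders - in_stock:,} chips")
# 1,300,000 chips, which is why Nvidia is pressing TSMC to expand output.
```

On those assumptions, the module pricing lines up with the quoted per-chip figure, and the order book exceeds current inventory by roughly 1.3 million units.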
Disclaimer: This image is taken from Reuters.

This year, Nanyang Technological University (NTU) flagged three students for academic misconduct, alleging that they relied on generative AI tools in their assignments. What boundaries should govern AI usage, at what point does it become misconduct, and is it time to rethink how assignments are structured and evaluated? Steven Chia and Otelli Edwards discuss these questions with Associate Professor Ben Leong, director of the AI Centre for Educational Technologies at NUS, and Jeremy Soo, co-founder of Nex AI.
Disclaimer: This podcast is taken from CNA.

In Made in SG, Melanie Oliveiro interviews Singaporeans working in the artificial intelligence space to explore how they are shaping and mentoring the next generation of AI-driven content creators. Jayce Tham, co-founder of media agency CreativesAtWork and generative AI content studio Dear.AI, shares how professionals in Singapore can use generative AI to enhance storytelling, content marketing, and production processes. Filmmaker, influencer, and Dear.AI Creative Director Jaze Phua discusses how AI fuels creative expression, enabling content creators to blend humour, narrative, and pop culture to produce highly shareable, viral content.
Disclaimer: This podcast is taken from CNA.

In the daily market analysis segment on Open For Business, hosts Andrea Heng and Genevieve Woo speak with Mel Siew, Portfolio Manager for Asia Public Credit at Muzinich & Co., about the trends and key developments shaping financial markets across the region.
Disclaimer: This podcast is taken from CNA.

Authorities are alerting the public to a new scam that uses fake digital identity cards. Could our tendency to casually share NRIC or passport scans via messages or email be making it easier for scammers? Daniel Martin discusses this with Matthias Yeo, CEO of CyberXCenter, a company dedicated to strengthening cybersecurity in Singapore.
Disclaimer: This podcast is taken from CNA.
