Technology
Tue, 07 Apr 2026
Direct-to-consumer (D2C) startups are increasingly adopting artificial intelligence (AI) solutions to improve last-mile delivery, especially in Tier-II and smaller cities. By using tools such as AI-driven voice calls, automated order and address verification, and converting cash-on-delivery (COD) orders to prepaid, these companies have achieved an 11 percent increase in delivery completion rates, according to data from Velocity Shipping.

Abhiroop Medhekar, co-founder and CEO of Velocity, explained that logistics inefficiencies are a major drain on profitability for digital-first brands. While demand from Tier-II and Tier-III markets has grown rapidly, delivery reliability continues to face challenges due to higher last-mile costs, limited network reach, and operational complexities. He noted that early, AI-powered interventions in processes like order verification, risk assessment, and delivery workflows significantly enhance delivery performance.

Although non-metro regions are becoming major growth drivers for e-commerce, ongoing last-mile issues often result in reverse logistics, increasing costs for companies. Failed deliveries and return-to-origin (RTO) orders can contribute to 25–30 percent of revenue losses during peak festive seasons.

A Bain and Company report highlights that three out of five new online shoppers since 2020 are from smaller cities, and nearly 60 percent of new sellers since 2021 are located outside Tier-I markets. These regions account for over 67 percent of total shipments, but only around 60 percent are successfully delivered, compared to a 73 percent success rate in metro areas. The gap is mainly due to structural challenges such as inconsistent address formats, limited courier networks, larger delivery zones, and a high proportion of COD orders, which increase the likelihood of cancellations and failed deliveries.
India’s e-commerce market is expected to expand from 70–80 billion dollars in 2024 to 180–200 billion dollars by 2030, with D2C channels projected to grow nearly three times faster than traditional marketplaces. Disclaimer: This image is taken from Business Standard.
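As a rough sanity check on that projection, the implied compound annual growth rate can be computed from the figures quoted in the article ($70–80 billion in 2024 to $180–200 billion by 2030, i.e. six years). The pairing of low/high endpoints below is an illustrative assumption, not from the source:

```python
# Implied compound annual growth rate (CAGR) from the article's projection:
# India's e-commerce market, ~$70-80B in 2024 to ~$180-200B by 2030 (6 years).

def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate, returned as a fraction (0.15 = 15%)."""
    return (end / start) ** (1 / years) - 1

# Conservative case: highest 2024 base, lowest 2030 target.
low = cagr(80, 180, 6)
# Optimistic case: lowest 2024 base, highest 2030 target.
high = cagr(70, 200, 6)

print(f"Implied CAGR: {low:.1%} to {high:.1%}")
```

Either way the quoted range implies growth of roughly 14 to 19 percent a year, which is consistent with the article's framing of D2C channels as a fast-growing segment within it.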
Google has enhanced Gemini by adding new mental health support capabilities along with improved safety features.

Google is introducing new updates to its Gemini AI aimed at helping users access mental health support more quickly. The company says these changes will enable the system to recognize signs of emotional distress and direct users to trusted resources, such as crisis helplines. In addition, Google has announced new funding to strengthen mental health support services worldwide.

Gemini will now make it easier for users to find help during difficult moments. If a conversation indicates that someone may be struggling, the chatbot will display a “Help is available” message along with links to relevant support services. In more critical situations, including signs of self-harm or suicidal thoughts, Gemini will provide a one-tap option to contact crisis helplines. Users can call, text, or chat with support services directly, and this option will remain visible throughout the conversation for easy access.

Google’s philanthropic division, Google.org, has committed $30 million over the next three years to help crisis helplines expand their reach and improve response capabilities. The company is also strengthening its partnership with ReflexAI by providing $4 million in funding and integrating Gemini into its training tools. This effort will include technical support to enhance training systems used by organizations that handle sensitive conversations.

Google is refining how Gemini responds to mental health-related topics. The system is designed to guide users toward real-world assistance rather than acting as a substitute for professional care. It is also programmed to avoid promoting harmful actions or reinforcing misinformation, instead encouraging users to seek reliable information and professional support when needed.

The update includes additional safeguards specifically for younger users. These measures aim to ensure that responses remain appropriate and avoid harmful or sensitive content. Gemini is restricted from presenting itself as a human or forming companion-like interactions. It is also designed to prevent encouragement of bullying, harassment, or other harmful behavior. Google noted that these safety measures will continue to evolve as the system improves. Overall, the update reflects the company’s broader efforts in the mental health space, emphasizing that while AI can help improve access to support, it is not a replacement for professional care.
Disclaimer: This image is taken from Google.

Technology
Wed, 08 Apr 2026
Karnataka government and NIMHANS draft policy to curb unsafe digital use among students.

Karnataka’s Department of Health & Family Welfare, together with NIMHANS and other stakeholders, has drafted a policy to tackle excessive and unsafe digital technology use among students. With nearly one in four adolescents showing problematic internet use, the policy acknowledges the rising mental health concerns linked to excessive screen time, including anxiety, sleep issues, poor academic performance, and social isolation, along with increased exposure to cyber risks like cyberbullying, grooming, and online exploitation.

The policy aims to promote digital well-being, emotional resilience, and responsible technology use through a structured, school-based framework. It emphasizes prevention, early identification, and management by integrating digital literacy, mental health awareness, and cyber safety into schools. A multi-stakeholder approach involves schools, teachers, parents, students, and government systems.

Schools are directed to conduct teacher training programs on healthy technology use and maintain proper communication with parents. Digital wellness will be embedded in life skills and ICT education, covering social media literacy, cyber safety, mental health impacts, and ethical use of technology. Each school will set screen-time norms (≤1 hour per day recreational use), address cyber misconduct, provide counselling, and train teachers to identify behavioural or academic red flags with clear referral pathways to mental health services. School-level bodies will oversee implementation, awareness, and incident management, alongside regular sensitization programs for students, teachers, and parents.

The policy encourages physical activity, hobbies, and tech-free periods for balanced development, and includes mechanisms to track digital distress, handle cyber incidents, and access support services such as Tele-MANAS (14416). A Training of Trainers (ToT) model will equip teachers to understand technology addiction (5C model: Craving, Control, Compulsion, Coping, Consequences), identify early warning signs, and implement classroom and peer-led interventions. Parents are recognized as key stakeholders, encouraged to enforce screen-time rules, create device-free zones, promote offline family engagement, and model responsible digital behaviour, supported by guidance from schools.

The draft policy defines clear roles and responsibilities: students practice responsible digital use and seek help when needed; teachers integrate digital wellness and monitor well-being; parents supervise technology use; schools implement policies and support systems; and the government provides guidelines, funding, and oversight.

The policy aims to improve digital literacy, encourage responsible technology use, reduce technology addiction and related mental health issues, enable early detection of mental health concerns, and strengthen school-parent collaboration. It represents a proactive, scalable approach to fostering a safe, balanced, and resilient digital environment for students.
Disclaimer: This image is taken from ANI.

Technology
Wed, 01 Apr 2026
DroneYards supplies indigenously developed FPV drones to the Indian Army.

Ghaziabad-based DroneYards Aerial Solutions has boosted the Indian Army’s operational capabilities by delivering over 200 advanced First-Person View (FPV) drones and training more than 350 soldiers in their use within just three months. This rapid deployment highlights the growing importance of indigenous technology in strengthening India’s defence preparedness.

The drones are fully developed in India and deliberately exclude Chinese components, ensuring greater security and self-reliance. Built for modern combat environments, they are equipped with Electronic Warfare (EW) capabilities, secure telemetry systems, and an extended operational range. These features allow troops to conduct surveillance, reconnaissance, and tactical missions more efficiently, even in challenging and hostile conditions.

To ensure effective utilization, DroneYards conducted intensive training programs at the Manipur-Assam border and other strategically sensitive areas. Soldiers were trained in real-time operations, enabling them to seamlessly integrate the drones into mission-critical tasks. A key feature of these drones is their triple radio redundancy, which enhances communication reliability and ensures continued operation even if one or more channels are compromised.

This initiative aligns with the broader ‘Make in India’ vision aimed at modernizing the armed forces through locally manufactured, secure technologies. DroneYards, along with other Indian firms like InsideFPV and DroneAcharya, is contributing significantly to reducing dependence on foreign defence systems while promoting innovation within the country.

The company’s efforts have already received recognition, with its drone platforms showcased by the Western and South Western Commands of the Indian Army. Overall, DroneYards’ contribution reflects the strategic importance of home-grown defence solutions in equipping India’s military to meet evolving security challenges and operate effectively in contested environments.
Disclaimer: This image is taken from Indian Defence News.

Technology
Tue, 31 Mar 2026
Austria intends to prohibit children under 14 from using social media.

Austria’s conservative-led, three-party government plans to ban social media use for children under 14, officials announced on Friday. Members of the cabinet from the ruling parties agreed on the principle of the ban, which aims to protect children from addictive algorithms and harmful content, including sexual abuse. However, the government has not specified when the ban will take effect or finalized how it will be implemented.

Vice Chancellor Andreas Babler of the Social Democrats emphasized the urgency of the measure, saying the government will “decisively protect children and young people from the negative effects of social media.” He added, “We can no longer stand by while these platforms make our children addicted and often unwell. The risks linked to this usage were ignored for too long, and now it is time to act.”

Austria would join a growing number of countries considering restrictions on underage social media use. Australia became the first nation to enforce a ban for under-16s in December. France’s lower house of parliament approved a similar measure for under-15s in January, and other countries are exploring comparable rules.

Babler and Alexander Proell, the conservative junior minister for digitization, said draft legislation for Austria’s ban is expected by the end of June. Instead of naming individual platforms, the government plans to apply the ban based on how addictive a platform’s algorithms are and whether it contains content such as sexualized violence.

The initiative reflects concerns about children’s mental health and exposure to harmful content online. By focusing on the design of platforms and the nature of content rather than specific apps, Austrian authorities aim to create a flexible framework that addresses the evolving digital landscape and protects minors from potential risks.
Disclaimer: This image is taken from Reuters.

Technology
Sat, 28 Mar 2026
Featured Images

Juries in the first U.S. trials over social media’s impact on children found Meta and Google liable, awarding $6 million and $375 million in separate cases. The plaintiffs argued the companies’ platform designs, not user content, caused harm, challenging Section 230’s legal protections. Both Meta and Google plan to appeal, which could reshape how U.S. law shields tech firms and affect lawsuits against other online platforms.

Disclaimer: This image is taken from Reuters.

Technology
Thu, 26 Mar 2026
Podcasts
Tanvi Kapoor
TalkBack: Big Tech versus Big Tobacco - Are We Repeating History?

In 1998, tobacco companies in the United States were made responsible for the damage caused by the products they produced and sold through the Tobacco Settlement. Today, a similar question arises for Big Tech: it is not only about the content on their platforms but also whether these platforms were intentionally created to keep users addicted. Daniel Martin explores this issue with Rajesh Sreenivasan, Head of Technology, Media, and Telecommunications at Rajah and Tann Singapore.

Disclaimer: This podcast is taken from CNA.

Technology
Sat, 28 Mar 2026
Ishani Kulkarni
The mental health dilemma of AI: Supportive tool or emerging risk? A look into 'AI Psychosis'

In Singapore, mental health professionals are noticing a small but increasing number of patients showing delusions, paranoia, or emotional dependence seemingly connected to frequent AI chatbot use. Although “AI psychosis” is not an official medical diagnosis, clinicians acknowledge that the issue is genuine. How does extensive interaction with AI blur the boundaries between reality and reinforcement? Who is most vulnerable, and what signs should families be aware of? Andrea Heng and Hairianto Diman discuss these questions with Dr. Amelia Sim, Senior Consultant at the Department of Psychosis, Institute of Mental Health.

Disclaimer: This podcast is taken from CNA.

Technology
Thu, 12 Mar 2026
Priya Iyer
How technology drains us - and ways to reclaim our control

With decisions delegated, chatbots replacing friends, and nature sidelined, Silicon Valley is shaping a life stripped of real connection. Escape is possible—but it will require a united effort.

Disclaimer: This podcast is taken from The Guardian.

Technology
Mon, 16 Feb 2026
Aravind Pillai
Majulah AI: Google expands its AI investments in Singapore.

Google has revealed plans for a significant increase in its AI investments in Singapore, featuring the launch of Majulah AI – a collection of training and innovation initiatives aimed at developing an AI-ready workforce. Daniel Martin speaks with Ben King, Managing Director of Google Singapore, about how these efforts will help Singapore achieve its goal of becoming an AI leader and accelerate AI adoption across the nation.

Disclaimer: This podcast is taken from CNA.

Technology
Wed, 11 Feb 2026