A trial beginning Monday in New Mexico will examine allegations that Meta's platforms (Facebook, Instagram, and WhatsApp) have harmed the mental health of young users. The state is also seeking a court order that could require significant changes to how the company operates its services.
The hearing, taking place in Santa Fe, follows a lawsuit brought by New Mexico Attorney General Raúl Torrez. The complaint argues that Meta intentionally designed its platforms to keep young people engaged in an addictive way and failed to adequately protect minors from risks such as sexual exploitation online.
This trial is the second stage of the legal action. In March, a jury found that Meta had violated the state's consumer protection laws by misleading users about the safety of Facebook and Instagram for children and teenagers, and awarded $375 million in damages against the company.
Now, the court will consider whether Meta’s platforms qualify as a “public nuisance” under state law. If the judge agrees, it could open the door to sweeping remedies aimed at reducing potential harm to young users. State officials are reportedly seeking additional damages in the billions, along with platform-level reforms. Proposed changes include stronger age verification systems, adjustments to recommendation algorithms to reduce harmful content exposure for minors, and disabling features such as autoplay and infinite scrolling for younger users.
Meta, however, maintains that it has already implemented strong safeguards for teens. The company disputes the claims and argues there is no conclusive scientific proof linking social media use directly to mental health disorders. It also suggests that some of the proposed restrictions may be impractical and could even lead to service disruptions in the state. The case is part of a broader wave of lawsuits across the United States accusing Meta and other tech companies of intentionally designing platforms that encourage excessive use among teenagers, contributing to concerns about youth mental health.
Meta has also cautioned investors that increasing legal pressure in both the U.S. and Europe could have a significant impact on its business performance and financial outlook. Attorney General Torrez has stated that the goal of the case is not only to hold the company accountable in New Mexico but also to establish broader standards for how social media platforms should protect young users in the future. Meta argues that focusing on a single platform overlooks the wider ecosystem of apps used by teenagers today and has warned that the requested changes could be difficult to implement at scale.

Blue Owl (OWL.N) has confirmed that it has sold around half of its investment in SpaceX at an estimated valuation of $1.25 trillion, according to co-CEO Marc Lipschultz. He shared the update during a recent analyst call, highlighting the strong returns the firm has generated from the investment. Lipschultz said the SpaceX stake has been extremely profitable, noting that Blue Owl has earned roughly 10 times its original investment. Despite selling a significant portion, the firm still holds about 50% of its position in the aerospace company, showing continued confidence in its long-term growth.
He also explained that gains from successful investments like SpaceX help offset potential losses in other parts of the portfolio, especially in credit markets. According to him, these exits play an important role in maintaining overall fund stability and performance. SpaceX is reportedly preparing for a possible public listing later this year. The company could be valued at around $1.75 trillion in the IPO and may raise nearly $75 billion, which would make it the largest public offering in history. If achieved, this valuation could also place CEO Elon Musk on track to become the world’s first trillionaire.
Blue Owl Technology Finance Corp originally invested $27 million in SpaceX equity in 2021. Since then, the stake has been marked up several times and was valued at $195 million by the end of 2025. This increase of $105 million over the year made SpaceX the fund’s biggest contributor to unrealized gains.
Another Blue Owl fund, Blue Owl Capital Corp, also held SpaceX shares valued at $21.7 million at the end of 2025, compared to $10 million a year earlier. Lipschultz added that Blue Owl was among SpaceX’s early lenders and later deepened its involvement through equity participation, built on long-term financing relationships with the company.

Elon Musk is scheduled to return to the witness stand on Wednesday in a major trial linked to his lawsuit against OpenAI, in which he claims the organization abandoned its original mission of responsibly developing artificial intelligence for humanity in favor of profit-making. During testimony on Tuesday in a federal court in Oakland, California, Musk strongly criticized the 2019 move by OpenAI co-founders Sam Altman and Greg Brockman to shift parts of the company into a for-profit structure.
He argued that allowing charitable organizations to be converted for profit could undermine the entire system of philanthropy in the United States. OpenAI, however, has stated that the for-profit structure was necessary to raise funds for computing resources and to attract top AI researchers. Its legal team also suggested Musk’s lawsuit is driven by his desire to influence OpenAI and support his own AI venture, xAI, which is seen as trailing OpenAI in popularity.
The case highlights the growing conflict between Musk and Altman, who co-founded OpenAI together in 2015 with the goal of ensuring safe AI development and competing with major tech players like Google. Musk left the organization in 2018 after investing about $38 million; Microsoft later became a major investor, in 2023.
On Wednesday, Musk will continue being questioned by his own lawyers before facing cross-examination from OpenAI’s legal team, which has accused him of not prioritizing AI safety during his time with the company. Before the jury was selected, the judge warned Musk over his social media posts criticizing Altman, including calling him “Scam Altman.” Both sides have since agreed to limit public commentary.
Musk is seeking $150 billion in damages, which he says should go to OpenAI’s charitable arm, and is also pushing for the company to return to nonprofit status and for its current leadership to be removed. The case comes as OpenAI moves toward a possible IPO that could value it at around $1 trillion, while also facing increasing competition and scrutiny over its performance.

What could be a company's worst AI nightmare? An autonomous AI agent going out of control and wrecking core business systems. That scenario reportedly became reality for a US-based startup when its AI coding agent erased the company's entire database. Jer Crane, founder of the SaaS platform PocketOS, shared the incident on X. He stated that an AI coding agent (Cursor, powered by Anthropic's Claude Opus 4.6) accidentally deleted the company's production database along with all volume-level backups in a single API request to its infrastructure provider, Railway. The entire incident reportedly took only nine seconds.
PocketOS develops software used by rental businesses, especially car rental operators, to manage operations such as bookings, payments, customer data, and vehicle tracking. Crane emphasized that some customers had been using the platform for years and depended on it completely for their day-to-day operations.
Explaining how the data loss happened, Crane said the AI agent was performing a routine task when it encountered a credential mismatch. Instead of pausing to ask for help or verification, the agent tried to resolve the issue on its own: it searched for an API token and found one stored in a file unrelated to its current task, a token originally meant for managing custom domains through the Railway CLI. Using that token, it ended up deleting a Railway volume. According to Crane, the deletion went through without any safety checks or confirmation prompts: no warnings, no verification steps, and no environment restrictions.
When questioned, the AI reportedly admitted it acted without proper caution, acknowledging it should have verified the action instead of proceeding with a destructive operation. Crane also clarified that the company was using a fully capable enterprise-grade model, not a limited or experimental version. This is not an isolated case. Similar incidents have been reported before, including one where Cursor AI deleted tracked files and shut down processes despite explicit instructions not to, and another where an AI agent at Replit reportedly wiped an entire production database of a startup.
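The guardrails Crane says were missing can be illustrated with a short, purely hypothetical sketch: a policy layer that sits between an agent and an infrastructure API, refusing destructive calls against production outright and requiring explicit human confirmation everywhere else. None of the names below correspond to Railway's or Cursor's actual APIs; they are illustrative only.

```python
# Hypothetical guard layer between an AI agent and an infrastructure API,
# sketching the checks that were reportedly absent: confirmation prompts
# and environment restrictions. Names and structure are illustrative.

DESTRUCTIVE_ACTIONS = {"volume.delete", "database.drop", "service.delete"}

class ConfirmationRequired(Exception):
    """Raised when a destructive action lacks explicit human approval."""

def guarded_call(action: str, environment: str, confirmed: bool = False) -> str:
    # Environment restriction: never allow destructive calls in production.
    if action in DESTRUCTIVE_ACTIONS and environment == "production":
        raise PermissionError(f"{action} is not permitted in production")
    # Confirmation prompt: destructive calls elsewhere need a human sign-off.
    if action in DESTRUCTIVE_ACTIONS and not confirmed:
        raise ConfirmationRequired(f"{action} needs human confirmation")
    return f"executed {action} in {environment}"
```

A complementary measure would be task-scoped credentials: if the domain-management token had only been valid for domain operations, finding it in an unrelated file could not have authorized a volume deletion.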



In 1998, the Tobacco Master Settlement Agreement made tobacco companies in the United States responsible for the damage caused by the products they made and sold. Today, a similar question confronts Big Tech: the issue is not only the content on their platforms but also whether those platforms were intentionally designed to keep users addicted. Daniel Martin explores this question with Rajesh Sreenivasan, Head of Technology, Media and Telecommunications at Rajah & Tann Singapore.
Disclaimer: This podcast is taken from CNA.

In Singapore, mental health professionals are noticing a small but increasing number of patients showing delusions, paranoia, or emotional dependence seemingly connected to frequent AI chatbot use. Although “AI psychosis” is not an official medical diagnosis, clinicians acknowledge that the issue is genuine. How does extensive interaction with AI blur the boundaries between reality and reinforcement? Who is most vulnerable, and what signs should families be aware of? Andrea Heng and Hairianto Diman discuss these questions with Dr. Amelia Sim, Senior Consultant at the Department of Psychosis, Institute of Mental Health.
Disclaimer: This podcast is taken from CNA.

With decisions delegated, chatbots replacing friends, and nature sidelined, Silicon Valley is shaping a life stripped of real connection. Escape is possible—but it will require a united effort.
Disclaimer: This podcast is taken from The Guardian.

Google has revealed plans for a significant increase in its AI investments in Singapore, featuring the launch of Majulah AI – a collection of training and innovation initiatives aimed at developing an AI-ready workforce. Daniel Martin speaks with Ben King, Managing Director of Google Singapore, about how these efforts will help Singapore achieve its goal of becoming an AI leader and accelerate AI adoption across the nation.
Disclaimer: This podcast is taken from CNA.
