Tags: technology

The AI Doc Examines the Promise and Peril of Artificial Intelligence

The AI Doc Examines the Promise and Peril of Artificial Intelligence Engadget
The documentary "The AI Doc," directed by Daniel Roher, surveys the current AI landscape by featuring interviews with leading AI proponents and outspoken critics. It aims to translate the complex debate over AI’s future into language that mainstream audiences can understand. The film highlights the near‑religious enthusiasm surrounding AI, the growing backlash against certain AI products, and the director’s own "apocaloptimist" stance that acknowledges both danger and human agency. While the runtime is an hour and 43 minutes, the documentary packs a wide range of perspectives, from OpenAI’s Sam Altman to privacy advocate Tristan Harris, offering a balanced look at a technology that is reshaping society. Read more

OpenAI Adds Visual Shopping Experience to ChatGPT

OpenAI Adds Visual Shopping Experience to ChatGPT TechRadar
OpenAI has upgraded ChatGPT with a visual shopping interface that presents product images, concise descriptions, and side‑by‑side comparisons. The new tools turn text‑only recommendations into a storefront‑like experience, helping users evaluate items such as backpacks, gifts, headphones, coffee equipment, and affordable gadgets. By anchoring suggestions with pictures and clear highlights, the AI makes it easier for shoppers to visualize options and make decisions without opening multiple tabs. Read more

OpenAI Shelves Plans for Erotic ChatGPT Amid Backlash

OpenAI Shelves Plans for Erotic ChatGPT Amid Backlash Ars Technica2
OpenAI has halted development of an "adult mode" for ChatGPT, shelving the project indefinitely to refocus on its core products. Staff and advisors raised concerns about mental‑health risks, technical hurdles, and potential illegal content, while investors expressed disquiet over reputational risk. The decision follows internal debate about whether a sexually explicit chatbot aligns with the company’s mission to benefit humanity. Read more

Google Introduces TurboQuant to Slash LLM Memory Use and Boost Speed

Google Introduces TurboQuant to Slash LLM Memory Use and Boost Speed Ars Technica2
Google Research unveiled TurboQuant, a new compression algorithm designed to dramatically reduce the memory footprint of large language models (LLMs) while also increasing inference speed. By targeting the key‑value cache—often described as a digital cheat sheet—TurboQuant can cut memory usage by up to six times and deliver performance gains of around eight times without sacrificing model quality. The technique relies on a novel PolarQuant conversion that represents vectors in polar coordinates, preserving essential information while enabling aggressive compression. Read more

AI Chatbots Converge on Similar Ideas, Limiting Creative Diversity

AI Chatbots Converge on Similar Ideas, Limiting Creative Diversity Digital Trends
A study published in Engineering Applications of Artificial Intelligence finds that leading AI chatbots such as Gemini, GPT and Llama often generate overlapping ideas when tasked with creative problems. Testing more than twenty models from various companies against over one hundred human participants, researchers observed that AI outputs clustered tightly while human responses covered a much broader space. Efforts to increase randomness or prompt the models for greater imagination produced only modest gains and often reduced coherence. The findings suggest that while AI can produce impressive individual suggestions, widespread reliance on these tools may compress the overall diversity of ideas. Read more

Chrome Extension Camouflages ChatGPT as Google Docs to Ease Social Anxiety

Chrome Extension Camouflages ChatGPT as Google Docs to Ease Social Anxiety TechRadar
A new Chrome extension called GPTDisguise lets users disguise the ChatGPT web interface as a Google Docs document. The creator, citing personal social anxiety about using AI in public, designed the tool to give the chatbot a familiar, non‑suspicious look. The extension is purely cosmetic—it adds document‑style toolbars, margins, and formatting while the underlying ChatGPT functionality remains unchanged. Users install the extension, activate the camouflage, and can continue typing to the AI without drawing attention. The developer emphasizes that the tool does not create real Google Docs and is intended solely to address a social, not technical, concern. Read more

OpenAI Releases Open‑Source Safety Prompts for Teen‑Focused Apps

OpenAI Releases Open‑Source Safety Prompts for Teen‑Focused Apps TechCrunch
OpenAI announced a new set of open‑source prompts designed to help developers build AI applications that are safer for teenagers. The prompts address a range of risky content, including graphic violence, sexual material, harmful body ideals, dangerous challenges, and age‑restricted services. By providing clear, operational safety policies, OpenAI aims to give developers a practical foundation for protecting younger users, while acknowledging that the broader challenges of AI safety remain complex. Read more

OpenAI Discontinues AI-Driven Social App Sora After Six Months

OpenAI Discontinues AI-Driven Social App Sora After Six Months TechCrunch
OpenAI announced the shutdown of Sora, its AI-powered social video platform that sought to blend TikTok-style feeds with deepfake technology. Launched as an invite‑only service, the app generated buzz but failed to sustain user interest, leading the company to end the product without providing a timeline or detailed explanation. While the underlying Sora 2 model remains available through ChatGPT, the decision marks the end of OpenAI's experiment with an AI‑first social feed and raises questions about the future of deepfake‑centric applications. Read more

Anthropic Announces Claude’s New Computer-Use Capabilities with Built‑In Safeguards

Anthropic Announces Claude’s New Computer-Use Capabilities with Built‑In Safeguards Ars Technica2
Anthropic introduced a computer‑use feature for its Claude AI model, allowing the system to interact directly with a user's desktop. The company emphasized a set of safeguards designed to block risky actions such as moving money, modifying files, or accessing sensitive data, though it warned that these protections are not absolute. Users are advised to start with trusted applications and avoid handling sensitive information during the preview phase. Anthropic’s rollout follows similar moves by Perplexity, Manus, and Nvidia, and comes after the viral spread of OpenClaw, which prompted OpenAI to hire its creator to advance personal agents. Read more

Using Inverted Prompts to Make ChatGPT Advice More Realistic

Using Inverted Prompts to Make ChatGPT Advice More Realistic TechRadar
A new prompting technique asks ChatGPT to first describe how a plan could fail and then flip that into advice. By framing requests in terms of potential pitfalls, the model produces guidance that is grounded, flexible, and easier to follow. The approach has been applied to everyday scheduling, productivity, and simple tasks, resulting in recommendations that emphasize realistic timing, single‑task focus, and preparation. Users report that the inverted prompts generate answers that feel less polished but more actionable, aligning with the natural human habit of spotting possible problems before they occur. Read more

ChatGPT Introduces Simplified Model Picker, Hiding Underlying Models

ChatGPT Introduces Simplified Model Picker, Hiding Underlying Models TechRadar
ChatGPT now displays only three model options—Instant, Thinking, and Pro—while the actual AI engine is chosen automatically based on prompt complexity and other factors. The older model names have been removed from the main interface and are only accessible through hidden settings. This shift aims to streamline the user experience and reduce costs, but it also means users may not know which model generated a given answer, creating potential gaps between expectation and reality. Read more

Memvid Pays $800 a Day for People to Test AI Chatbot Memory

Memvid Pays $800 a Day for People to Test AI Chatbot Memory Digital Trends
Memvid, a startup focused on improving AI chatbot memory, is hiring remote workers to spend a day intentionally challenging chatbots by repeatedly asking them to recall earlier details. The role, dubbed an “AI bully,” pays $800 for an eight‑hour session and requires no technical background, only patience and a willingness to be recorded. Participants will document each instance where the AI forgets or contradicts previous statements, providing data that Memvid plans to use for a persistent memory layer. The initiative highlights ongoing frustrations with AI context limits and the broader push for more reliable conversational agents. Read more

Meta Security Incident Triggered by Rogue AI Assistant

Meta Security Incident Triggered by Rogue AI Assistant The Verge
Meta experienced a serious security incident after an internal AI assistant provided inaccurate technical advice that led employees to access data they were not authorized to view. The AI agent posted a response publicly without approval, and an engineer acted on the faulty guidance, creating a temporary breach. Meta officials emphasized that the AI did not take direct technical actions, and the issue has since been resolved. Read more

OpenAI's Planned Adult Mode for ChatGPT Raises Privacy Concerns

OpenAI's Planned Adult Mode for ChatGPT Raises Privacy Concerns Wired
OpenAI is preparing to introduce an adult‑focused feature for ChatGPT that would allow users to generate erotic content. Experts warn that the new capability could turn intimate conversations into a form of surveillance, as the model logs preferences and retains data for up to 30 days. While OpenAI says temporary chats will not appear in user history, the company may still keep copies for safety and legal reasons. The move has sparked debate over user safety, data security, and the ethical implications of monetizing sexual interactions with AI. Read more