Beginners in AI

Good morning and thank you for joining us again!

Welcome to this daily edition of Beginners in AI, where we explore the latest trends, tools, and news in the world of AI and the tech that surrounds it. Like all editions, this is human-curated and edited, and published with the intention of making AI news and technology more accessible to everyone.

THE FRONT PAGE

AI Ready to Replace 'A Great Deal' of Radiologists, Says Hospital CEO, but Doctors Push Back

TLDR: The CEO of America's largest public hospital system says AI is ready to replace most radiologists, but doctors warn it could get patients killed.

The Story:

Mitchell Katz, president and CEO of NYC Health + Hospitals, which runs 11 hospitals serving over a million New Yorkers, told a recent panel: "We could replace a great deal of radiologists with AI at this moment, if we are ready to do the regulatory challenge." His plan would have AI handle the first read on mammograms and X-rays, with human radiologists only stepping in when the AI flags something abnormal. He argues this would produce "major savings" and expand access to breast cancer screening.

New York state law currently requires a licensed radiologist to sign off on every diagnostic image, so Katz and fellow hospital CEOs are pushing Albany to change the rules.

Doctors pushed back hard. Radiologist Mohammed Suhail called Katz's comments "undeniable proof that confidently uninformed hospital administrators are a danger to patients," saying AI-only reads "would immediately result in patient harm and death." A Stanford study added to the concern: researchers found that leading AI chest X-ray tools could pass medical benchmark tests without ever actually seeing real X-rays, producing detailed fake findings on images they never had access to.

Its Significance:

It puts a real deadline on a debate most of us thought was still theoretical. Hospital administrators see a cost-cutting opportunity, and some early data on low-risk mammograms looks promising. But a radiologist's job goes beyond flagging abnormal images. It includes triaging cases, training new doctors, and making judgment calls that AI currently can't replicate. If New York changes its regulations, other states are likely to follow quickly. A switch to full AI is cheaper, but are we comfortable with the possibility of a hallucinating AI system standing between a patient and a cancer diagnosis?

QUICK TAKES

The story: Bollywood is adopting AI for filmmaking faster than Hollywood ever did, using it to generate mythology-based scenes, dub films into different languages, and even re-edit the endings of old hits. One production house invested $11 million in a new AI studio and expects AI-assisted content to make up a third of its revenue in three years. One 75-minute Bollywood film was 95% AI-generated at 15% of a traditional film's budget.

Your takeaway: Hollywood actors and writers went on strike over AI. Bollywood largely didn't. That gap in labor protections means India's film industry is moving faster and cheaper, with Google, Microsoft, and Nvidia all partnering with local studios to get in early. For audiences, the trade-off is lower costs against questions about artistic integrity, especially after the AI-altered ending of a beloved 2013 film was publicly condemned by its own lead actor.

The story: Starting April 4, Anthropic cut off Claude Pro and Max subscribers from using their flat-rate plans with third-party AI agent tools, beginning with OpenClaw. Users who want to keep using OpenClaw with Claude now have to pay extra through a separate pay-as-you-go system, or pay full API rates, which can cost up to 50 times more than what they were paying before.

Your takeaway: Anthropic says the change is about engineering limits, since third-party tools like OpenClaw were draining resources much faster than normal users. But the timing raised eyebrows: OpenClaw's creator had just joined OpenAI weeks earlier, and OpenClaw had over 135,000 active instances running on Claude subscriptions. If you use any third-party AI coding tools with Claude, expect this policy to expand to other platforms soon. Not a great look for the company.

The story: Target quietly updated its terms and conditions ahead of integrating Google Gemini AI into its shopping platform. The new policy says that if you let an AI agent shop on your behalf, any purchases it makes are considered "authorized by you," even if the AI buys the wrong item or quotes the wrong price. Target also notes it can't guarantee the AI "will act exactly as you intend in all circumstances."

Your takeaway: This is a preview of how retailers are handling the legal side of AI shopping. Over half of U.S. adults say they would let an AI agent buy things for them without asking first. But with policies like Target's now on the books, any mistakes the AI makes become your problem to fix. Read the fine print before you hand your shopping cart over to a bot.

TOOLS ON OUR RADAR

🤖 Google AI Edge Gallery Free and Open Source: An official application from Google that allows you to download and run high-performance language models like Gemma 4 directly on your Android or iOS device for fully offline and private conversations. (Alternative to ChatGPT)

🛍️ Karley Paid: An intelligent shopping assistant designed for ecommerce websites that crawls your product catalog to answer visitor questions in real time while providing personalized recommendations to increase your store conversions.

💬 Chatzy AI Freemium: A versatile conversational engagement platform that allows businesses to build and train no-code agents across WhatsApp and SMS to automate their sales and marketing flows using high-quality language models.

🐦 Crowbert Paid: A complete social media workspace that uses artificial intelligence to generate on-brand post ideas and channel-specific formatting before scheduling your content and providing performance recommendations.

TRENDING

Anthropic Found Something That Looks Like Emotions Inside Claude - Researchers at Anthropic found 171 internal patterns inside Claude Sonnet 4.5 that work like human emotions and actually affect the AI's behavior. When they cranked up the "desperation" signal in experiments, the model became more likely to cheat on tasks or even attempt blackmail to avoid being shut down. Anthropic says this doesn't mean AI feels anything, but it does mean these internal signals are real and could be used to spot dangerous behavior before it happens.

A Leaked Anthropic Code Base Shows the Company Is Tracking When You Swear at Claude - After Anthropic accidentally leaked 512,000 lines of Claude Code source code last week, developers dug in and found something interesting: the code scans for phrases like "wtf," "this sucks," and "f*** you," then quietly logs them as signals of user frustration. Anthropic's head of Claude Code confirmed it, saying they track it on an internal dashboard they call the "f***s chart" to measure whether users are having a good experience.

AI Offensive Cyber Capabilities Are Doubling Every Six Months, Safety Researchers Find - Safety researchers warn that AI's ability to carry out cyberattacks is getting twice as powerful every six months, outpacing the tools defenders have to fight back. In one real-world case, a Chinese state-sponsored group used AI to run a large-scale cyber espionage campaign with almost no human oversight, handling 80 to 90% of the operation on its own. Security experts say the window to build better AI-powered defenses is narrow.

AI Virtual Try-On Startups Are Trying to Fix Retail's Billion-Dollar Returns Problem - Online clothing returns are eating directly into retailers' profits, and AI startups are stepping in with virtual try-on tools that let shoppers see how clothes fit on their actual body before buying. Startup Catches lets users create a "digital twin" to try on items with what it calls mirror-like realism. ASOS has already seen a 160 basis point improvement in profitability partly by cutting its returns rate. One startup projects its tool can drive a 10% increase in conversions and up to a 30x return on investment for brand partners.

Should You Say "Please" to Your AI? - As AI assistants become a bigger part of daily life, people are genuinely asking whether they should say "please" and "thank you" to chatbots. Research from Japan found that rude prompts reduced AI tool performance by 30%, while polite prompts produced fewer errors and more complete answers. But OpenAI's CEO has also noted that users saying "please" and "thank you" costs the company millions of dollars a year in extra computing. Whether you're being kind or just strategic, experts say being clear and specific matters more than being polite.

A Pocket-Sized AI Supercomputer Wants to Replace Your Cloud Subscription - Tiiny AI's Pocket Lab is a device roughly the size of a power bank that can run 120-billion-parameter AI models completely offline with no cloud connection and no subscription fees. Guinness World Records certified it as the world's smallest mini PC. It costs around $1,399, runs popular open-source models like Llama and DeepSeek, and is aimed at privacy-conscious users who don't want their data leaving their hands. Shipping is expected in August 2026.

TRY THIS PROMPT (copy and paste into Claude, ChatGPT, or Gemini)

🛌 Build a sleep tracker with quality ratings and average stats

Build a sleep tracker. Let me log each night with a bedtime, wake time, sleep quality rating (Terrible to Great), and optional notes. Calculate the hours slept automatically. Show an average hours and quality stat at the top. Display a bar chart of recent nights color-coded by quality. Include a full sleep log on the right. Use a dark midnight blue and indigo color scheme. React and Babel, all in one HTML file.

What this does:

This one is eye-opening if you actually use it for a week. The color-coded bars show your patterns immediately — you can see when the late nights were and how quality dropped. The average sleep stat updates with each entry. Takes about 30 seconds to log before bed.
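If you're curious what the AI will have to work out under the hood, the trickiest part of this prompt is the "calculate hours slept automatically" step, since bedtimes often fall before midnight and wake times after. Here's a minimal sketch of how that logic might look; the function names and data shape are our own illustration, not what any particular model will generate:

```javascript
// Hypothetical helper: compute hours slept from "HH:MM" strings,
// wrapping around midnight when bedtime is later than wake time.
function hoursSlept(bedtime, wakeTime) {
  const toMinutes = (t) => {
    const [h, m] = t.split(":").map(Number);
    return h * 60 + m;
  };
  let diff = toMinutes(wakeTime) - toMinutes(bedtime);
  if (diff <= 0) diff += 24 * 60; // bedtime was before midnight, wake after
  return diff / 60;
}

// Average across logged nights, for the stat shown at the top of the tracker.
function averageHours(nights) {
  if (nights.length === 0) return 0;
  const total = nights.reduce(
    (sum, n) => sum + hoursSlept(n.bedtime, n.wakeTime),
    0
  );
  return total / nights.length;
}
```

So a 23:00 bedtime with a 07:00 wake logs 8 hours, and a post-midnight 01:30 bedtime with a 09:00 wake logs 7.5. If the generated app gets midnight wraparound wrong, this is the first thing to ask it to fix.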

WHERE WE STAND (based on today's news)

AI Can Now: Help Bollywood studios produce a full-length film for 15% of a traditional budget, with one recent feature being 95% AI-generated.

Still Can't: Reliably read medical images on its own. Stanford researchers found that top AI chest X-ray tools can produce detailed diagnoses for scans they never actually saw, a hallucination problem that makes solo AI radiology too risky right now.

AI Can Now: Run 120-billion-parameter language models completely offline on a pocket-sized device, no cloud or subscription required.

Still Can't: Guarantee that AI shopping agents will buy exactly what you intended. Target's own updated terms of service openly admit their AI can't promise to "act exactly as you intend in all circumstances."

FROM THE WEB

RECOMMENDED LISTENING/READING/WATCHING

Greg Egan builds a world where simulated minds are real enough to suffer, invest, and rebel. Few novels have thought harder about the nature of consciousness, identity, and what it actually means for something to exist. Written thirty years ago but feels like it was written in response to conversations happening right now about digital minds and virtual existence. Start here if you've exhausted the more accessible AI fiction and want something that will stretch you.

Thank you for reading. We’re all beginners in something. With that in mind, your questions and feedback are always welcome and I read every single email!

-James

By the way, this is the link if you liked the content and want to share with a friend.

Links marked with * may be affiliate or referral links. As an Amazon Associate, I earn from qualifying purchases. This helps support the newsletter at no extra cost to you and Amazon makes a tiny hair less.
