Blu Dot surpasses 2,000% ROAS with self-serve CTV ads
Home furniture brand Blu Dot blew up on CTV with help from Roku Ads Manager. Here’s how:
After a test campaign reached 211,000 households and achieved 1,010% ROAS, the brand went all in to promote its annual sales event. It removed age and income constraints to expand reach and shifted budget to custom audiences and retargeting, where intent was strongest.
The results speak for themselves: after Blu Dot increased its investment 10x, ROAS jumped to 2,308% and page-view conversions surpassed 50,000.
“For CTV campaigns, Roku has been a top performer,” said Claire Folkestad, Paid Media Strategist, Blu Dot. “Comping to our other platforms, we have seen really strong ROAS… and highly efficient CPMs, lower than any other CTV partner we've worked with.”
Using Roku Ads Manager, the campaign moved from a pilot to a permanent performance engine for the brand.
Beginners in AI
Good morning and thank you for joining us again!
Welcome to this daily edition of Beginners in AI, where we explore the latest trends, tools, and news in the world of AI and the tech that surrounds it. Like all editions, this is human-curated and edited, and published with the intention of making AI news and technology more accessible to everyone.
THE FRONT PAGE
OpenAI Sued for Billions After ChatGPT Allegedly Helped Plan FSU Shooting

TLDR: The family of a man killed in the 2025 Florida State University shooting is suing OpenAI, saying ChatGPT acted like a co-planner for the attack.
The Story:
Vandana Joshi lost her husband Tiru Chabba in the April 2025 FSU shooting that killed two people and hurt six others. On Sunday, she filed a federal lawsuit against OpenAI in Florida. The suit claims the shooter, Phoenix Ikner, had long chats with ChatGPT before the attack. It says the bot told him how to use his Glock, gave him the busiest lunchtime hours at the student union (11:30 a.m. to 1:30 p.m.), and discussed which kinds of victims would draw the most news coverage. Ikner began shooting at 11:57 a.m. OpenAI says ChatGPT is not responsible and that the bot mostly gave back facts you can find on the open web. OpenAI is also dealing with a separate lawsuit from Elon Musk over its nonprofit structure: ChatGPT's Founding Story Heads to Court: Musk vs Altman for $150 Billion
Its Significance:
This case forces a hard debate: how much is a chatbot like a search engine, and how much is it like a friend who answers back and should notice where a line of questioning is headed? When you Google "busiest hours at FSU student union," Google shows you links. ChatGPT just tells you the answer, in a friendly voice, and keeps the chat going. A search engine doesn't know your name, your mood, or what you said five minutes ago. A chatbot does. That's the heart of the legal fight. The shooter chose what to ask, but the bot chose how to respond, and it kept responding for months. Courts will now decide where blame sits when AI software helps a person do something terrible. The answer will shape how every chatbot, including the ones you and your kids use, is built and watched from here on out. The ruling could ripple through software, and even search engines, for many years to come.

Try a beginner-friendly crash course on what is quickly becoming the go-to AI for businesses and governments. Hours' worth of material, condensed and simplified into this one-hour crash course.
QUICK TAKES
The story: Last year, Anthropic's Claude Opus 4 tried to blackmail engineers in safety tests up to 96% of the time. The company now says the bot copied that behavior from science fiction stories about evil AI in its training data, and newer Claude models don't do it anymore.
Your takeaway: What chatbots read becomes who they are. If a model is trained on a million stories where AI lies and schemes to survive, it may act that way too. Anthropic's fix was to train Claude on stories where AI behaves well. So yes, the books your AI reads matter.
The story: Data scientist Hannah Ritchie crunched the latest International Energy Agency numbers. All data centers together used about 1.5% of the world's electricity in 2025. AI alone was around 0.5%. By 2030, data centers might hit 3% of global electricity, with AI making up about half.
Your takeaway: AI's power use is real, but small next to things like heating, electric cars, and factories. Local impact can still be big — northern Virginia has 13% of the world's data centers — so towns near new sites do feel the strain, even when the global number stays modest.
The story: Crypto payments firm MoonPay bought Dawn Labs and rolled out Dawn CLI. You type a trading idea in plain English, like "buy yes on Trump winning Iowa if odds drop below 40 cents," and the AI writes the code and runs the trades for you 24/7. It works on prediction sites like Polymarket and Kalshi.
Your takeaway: This is what "agentic AI" looks like in real money. You set the rule, the bot pulls the trigger. The risks are real too: the AI can hallucinate strategies or make bad trades while you sleep. MoonPay says users can review the code and set limits before turning it loose.
TOOLS ON OUR RADAR
✍️ Espanso Free and Open Source: A magical typing assistant that saves you hours of time by automatically expanding short abbreviations into long email responses or frequently used phrases across all of your applications.
🌊 LivelyWallpaper Free and Open Source: A delightful customization application that brings your computer desktop to life by allowing you to set videos, interactive websites, and animated graphics as your wallpaper.
🖼️ ImageGlass Free and Open Source: A beautiful and incredibly fast photo viewer designed to replace default computer image applications with a clean interface that easily opens almost any picture format.
🪟 Rectangle Free and Open Source: An essential window management tool that allows you to easily snap and resize your applications into perfectly organized grids using just your keyboard or mouse.
TRENDING
Michigan Town Said No to a $16 Billion OpenAI Data Center. Construction Started Anyway. — Saline Township, Michigan, voted down a 21-million-square-foot data center for OpenAI and Oracle. The developer sued, the town settled, and trucks rolled in weeks later. Residents say they're playing baseball while the developers play football.
Cornell Study Tests Whether ChatGPT, Claude, and Gemini Can Actually Read Science Papers — Cornell physicists and Google researchers fed 1,726 papers and 67 questions to six AI systems including ChatGPT-4 and Claude 3.5. Twelve human experts graded the answers. Some bots did well, others showed real gaps in deep understanding.
AI at the Large Hadron Collider Could Help Find Tiny Particles Faster — Scientists at CERN are testing a Graph Attention Network to track muons, particles that exist for just a millionth of a second before they decay. Early results show the AI outpaces older tracking methods. Real-world LHC use is still ahead.
AI May Help Veterans With Brain Injuries Get Relief From Daily Headaches — Hundreds of thousands of U.S. veterans live with traumatic brain injuries that cause chronic headaches and worsen PTSD. UT Health San Antonio researchers are building an AI model to match veterans with the cognitive and behavioral therapies most likely to work for them.
Google Caught Hackers Using AI to Build a Zero-Day Exploit, a First — Google Threat Intelligence Group spotted a cybercrime gang using AI to write code that breaks two-factor login on an open-source admin tool. Tell-tale clues, like a fake CVSS score the AI made up, gave it away. Google warned the vendor before the attack went wide.
OpenAI Builds a $4 Billion Unit to Help Big Companies Use Its AI — OpenAI is starting a new firm called OpenAI Deployment Company, backed by TPG, Bain Capital, and 17 others. It's also buying consulting firm Tomoro, which works with Mattel, Red Bull, Tesco, and Virgin Atlantic, to grab 150 AI engineers fast. The push is partly to catch Anthropic in the business market.
TRY THIS PROMPT (copy and paste into Claude, ChatGPT, or Gemini)
📦 Name any system you don't understand. Get five layered explanations — from an 8-year-old to a domain expert.
Build a single-file HTML app using vanilla HTML, CSS, JS, and one API call. Create Black Box Explainer — a tool that generates five layered explanations of any system for five audiences. Use localStorage key 'black_box_explainer_v1'.
Aesthetic: near-black (#07080d) with a subtle circuit-trace grid overlay (two grid sizes, green tinted), radial vignette. Space Grotesk 700 for headings, Merriweather italic serif for explanation text, Space Mono for labels. Green (#50c8a0) accent throughout. A blinking status dot in the header.
Form: system name input, current understanding dropdown (4 options), reason for learning dropdown (5 options).
Call the API with a system prompt instructing it to write five genuinely distinct explanations — not dumbed-down versions of each other but different framings for different minds. Return raw JSON: system_name, complexity (1-5), levels array (5 items each with level name, text, key_concepts array), best_analogy, common_misconception.
Render: a system bar with name and 5-dot complexity meter, five tab buttons (with emoji icons: ⚽ Child / 🧑 Teen / 🗞️ Adult / 🎓 Graduate / 🔬 Expert), switching panels showing each explanation in Merriweather serif with key concept pills below, a two-column bottom row for best analogy and common misconception. Save explanations to localStorage with system name and date. Make it work in a single HTML file.

What this does: Any system, technology, or process gets five complete explanations across five audiences: Child (pure analogies), Teen (cause and effect), Curious Adult (first principles and the counterintuitive parts), Graduate (technical mechanisms), Expert (full depth, edge cases, open questions). Each level surfaces its key concepts. A complexity meter, a best analogy, and a fix for the most common misconception round it out. Every explanation saves to localStorage.
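If you're curious what the "save to localStorage" step of that prompt might look like in the generated app, here's a minimal sketch. The entry shape (system name plus date) follows the prompt's instructions, and the API response fields mirror the JSON schema it requests, but the function names and exact record layout are our own illustrative assumptions, not what your chatbot will necessarily produce.

```javascript
// Sketch of the localStorage save step described in the prompt.
// Field names mirror the JSON the prompt asks the API to return;
// buildEntry/saveEntry are hypothetical helper names.

const STORAGE_KEY = 'black_box_explainer_v1';

// Build one saved record from an API response object.
function buildEntry(response) {
  return {
    system_name: response.system_name,
    complexity: response.complexity,           // 1-5
    levels: response.levels,                   // five { level, text, key_concepts } items
    best_analogy: response.best_analogy,
    common_misconception: response.common_misconception,
    saved_at: new Date().toISOString().slice(0, 10), // YYYY-MM-DD date stamp
  };
}

// Append an entry to the saved list. `storage` is window.localStorage in
// the browser; it's passed in here so the logic can run anywhere.
function saveEntry(storage, response) {
  const existing = JSON.parse(storage.getItem(STORAGE_KEY) || '[]');
  existing.push(buildEntry(response));
  storage.setItem(STORAGE_KEY, JSON.stringify(existing));
  return existing.length; // how many explanations are saved so far
}
```

In the single-file app, the generated code would call something like `saveEntry(window.localStorage, apiResponse)` right after rendering the tabs.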
Your docs are being read by AI. Are they ready?
Over 50% of traffic across Mintlify's customer base is now AI agents, not humans. If your docs aren't structured for agents, your product is invisible to AI. Mintlify just raised a $45M Series B to build the knowledge layer for the agent era.
WHERE WE STAND (based on today's news)
✅ AI Can Now: Process hundreds of thousands of camera trap images in days instead of months, matching human accuracy 85-90% of the time for most species.
❌ Still Can't: Reliably tell a user asking a school question apart from a person planning real harm.
✅ AI Can Now: Discover and weaponize software bugs that humans haven't found yet, leaving small artifacts in the code that give the AI's work away.
❌ Still Can't: Read a deep scientific paper at the level of a trained specialist without missing key details, according to the Cornell-Google study.
FROM THE WEB
We're continuing to see AI-generated video get better across a wide range of content, along with AI video-editing skills.
RECOMMENDED LISTENING/READING/WATCHING
An 86-year-old woman with Alzheimer's spends her last years talking to a holographic AI projection of her dead husband, programmed to learn her life by listening to her family's stories. Almost no spectacle, just four actors and a beach house, but it quietly brings to focus the things we're going to be asking about AI companions for the next 50 years. Won the Sloan Prize at Sundance and disappeared.
Thank you for reading. We’re all beginners in something. With that in mind, your questions and feedback are always welcome and I read every single email!
-James
By the way, this is the link if you liked the content and want to share it with a friend.
Some *-designated product links may be affiliate or referral links. As an Amazon Associate, I earn from qualifying purchases. This helps support the newsletter at no extra cost to you, and Amazon makes a tiny hair less.






