AI, automation, and data privacy are reshaping societies at breakneck speed. While they offer incredible potential, they also raise questions about jobs, security, and ethics that no country can ignore. Here are relevant stories from YouTube, Facebook, X, TikTok, Reddit, and Substack.
YouTube
-
This is interesting as a first large diffusion-based LLM. Most of the LLMs you've been seeing are ~clones as far as the core modeling approach goes.
They're all trained "autoregressively", i.e. predicting tokens from left to right. Diffusion is different - it doesn't go left to right, but all at once. You start with noise and gradually denoise into a token stream.
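To make the contrast concrete, here is a minimal sketch of the two sampling loops. The `model` and `denoiser` callables (and the masking-as-noise choice) are hypothetical stand-ins for illustration, not any particular model's API.

```python
import torch

def autoregressive_decode(model, prompt_ids, num_new_tokens):
    """Left to right: each new token is sampled conditioned on everything before it."""
    ids = list(prompt_ids)
    for _ in range(num_new_tokens):
        logits = model(torch.tensor([ids]))[0, -1]   # model predicts only the next token
        next_id = torch.distributions.Categorical(logits=logits).sample().item()
        ids.append(next_id)
    return ids

def diffusion_decode(denoiser, seq_len, num_steps, mask_id):
    """All at once: start from a fully noised (here, fully masked) sequence and
    re-estimate every position over a fixed number of denoising steps."""
    ids = torch.full((1, seq_len), mask_id)          # "pure noise": every position masked
    for step in range(num_steps):
        logits = denoiser(ids, step)                 # logits for all positions at once
        ids = logits.argmax(dim=-1)                  # refine the whole sequence in parallel
    return ids[0].tolist()
```

The practical difference is that the autoregressive loop runs once per generated token, while the diffusion loop runs a fixed number of denoising steps regardless of sequence length.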
Most of the image / video generation AI tools actually work this way and use Diffusion, not Autoregression. It's only text (and sometimes audio!) that has resisted.
So it's been a bit of a mystery to me and many others why text prefers Autoregression while images/videos prefer Diffusion. This turns out to be a fairly deep rabbit hole that has to do with the distribution of information and noise, and our own perception of them, in these domains. If you look closely enough, a lot of interesting connections emerge between the two as well.
All that to say that this model has the potential to be different, and possibly showcase new, unique psychology, or new strengths and weaknesses. I encourage people to try it out! Read the entire post
-
NEW AI FOR HUMANS!
Amazon's new Alexa+ is better than ol' Alexa Minus
Anthropic's Claude Sonnet 3.7 is VERY fun
Microsoft's Flight Sim shows how AI is changing games
OpenAI's incoming GPT 4.5 AND...
's Uncensored Mode dates our AI Friend Gash. Links below! Link to post here.
-
Welcome to today's Robotics and Autonomous Systems research roundup. Here, we spotlight the most groundbreaking papers shaping the future of intelligent machines and autonomous systems.
Contents
Let's Get to the Point: LLM-Supported Planning, Drafting, and Revising of Research-Paper Blog Posts. Read the Entire Article
Efficient Diffusion Transformer Policies with Mixture of Expert Denoisers for Multitask Learning
ProxyLLM: LLM-Driven Framework for Customer Support Through Text-Style Transfer
Decictor: Towards Evaluating the Robustness of Decision-Making in Autonomous Driving Systems
Understanding Emotional Body Expressions via Large Language Models
Bots against Bias: Critical Next Steps for Human-Robot Interaction
-
MY MIND HAS JUST BEEN BOGGLED. I just tried the early release of OpenAI's new DeepResearch feature (rolling out later today to the $200/month Pro users). I've been working in the CRM software industry for 30+ years. It's not just that I've had court-side seats to the game, I've been on the court, doing my best to play the game. First in vertical CRM (my first startup) and now as co-founder of HubSpot. I've had some modest success and I feel like I have a pretty good handle on things in the industry. That's why OpenAI's new DeepResearch feature boggled my mind. I asked it to create a detailed research report including competitive analysis, positioning, growth, product strategy and AI vision for the industry. What it produced was an 11,000-word report. With data. And citations. And tables. And genuinely great insights -- including some I hadn't really thought of before. Read the entire post here.
Mark Zuckerberg: “This will be a defining year for AI. In 2025, I expect Meta AI will be the leading assistant serving more than 1 billion people, Llama 4 will become the leading state-of-the-art model, and we'll build an AI engineer that will start contributing increasing amounts of code to our R&D efforts. To power this, Meta is building a 2GW+ datacenter that is so large it would cover a significant part of Manhattan.” Read the entire post