The Next Wave – Meta Replacing Creators? + Sam Altman’s Mistake & 3 Big AI Updates

Tune in on your favorite platform below. Subscribe, rate it, and share it with another entrepreneur who needs to hear this!


Welcome back, friends! I’m Joe Fier, and on this episode of The Next Wave, Matt and I got together to talk about the wild and fast-moving world of AI. So much has happened this week, and we didn’t want you to miss out on the biggest updates. We talked about how top AI chatbots like ChatGPT and Claude are getting way better at making interactive visuals, what Canva’s new tool means for anyone who works with graphics, what Sam Altman from OpenAI and Jensen Huang from NVIDIA believe about the future of intelligence, and why Meta just bought a social network run entirely by AI agents. We wrapped with a peek at some new robots and even a quick laugh at Alexa’s new adult mode.

If you want a simple, clear breakdown of what’s moving and shaking in AI right now (without fancy tech words), you’re in the right place.

ChatGPT and Claude Release Interactive Visual Tools

OpenAI recently rolled out a new feature for ChatGPT: you can ask it to explain math and science problems with interactive visuals and sliders. It’s a fresh way to actually see how different formulas work. For example, you can change a triangle’s side length and ChatGPT will adjust the image right as you move the slider. The only catch? These visuals aren’t generated from scratch for every prompt. ChatGPT pulls from a set of about 70 pre-made interactive visuals (think charts for compound interest, formulas from school, and so on), so you need a prompt that matches one they already have ready.

Claude from Anthropic launched its own version just two days later. What makes Claude different is that it can actually create custom, working interactive graphs and visual explanations on the fly. Matt and I both tried building graphs about compound interest, baseball schedules, or timelines, and Claude gave us dynamic visuals we could click, filter, and adjust—sometimes better than ChatGPT, sometimes with glitches. Claude’s tool also works with follow-up prompts. Ask it to break down any topic and then add, “Now make an interactive diagram or chart I can experiment with directly here in chat.” That’ll often give you a responsive visual recap of the previous topic.

Key Takeaways:

  • ChatGPT’s visuals are fast but limited to pre-built templates.
  • Claude builds new visuals from scratch and can be more flexible.
  • Both platforms can help you see complex ideas in a way that’s easier to grasp.
  • Try adding words like “interactive” and “directly here in the chat” to get better visuals.

Useful Prompts:

  • “Take what you just explained about [topic] and turn it into an interactive diagram/chart that I can click/adjust/experiment with directly here in the chat.”
  • “Create an interactive explanation of [math or science concept].”

Canva’s Magic Layers—A New Way To Edit Any Image

Canva just dropped Magic Layers, and we were blown away. This new feature takes a flat image—maybe a thumbnail you made for a show, a photo, or even an infographic from another tool—and separates every piece into its own movable, editable layer. You can move yourself to the other side of the picture, change backgrounds in one click, adjust text, and reshape icons. It works with both AI-generated images and photos you took yourself.

Matt and I used Magic Layers to remix YouTube thumbnails, create new layouts for infographics, and mess around with pictures for pure fun. Magic Layers handled the text surprisingly well, even filling in letters that were behind someone’s head. Sometimes the tool merged backgrounds or missed gradient details, but for most images it just plain works.

Key Takeaways:

  • Easily remix, edit, or rebuild old images—change everything from layout to text.
  • Great for A/B testing YouTube thumbnails, fixing typos in graphics, or turning static JPEGs into editable Canva files.
  • Works best with simple backgrounds or clear images; complex gradients might need a few tries.
  • You can even make low-budget explainer videos by animating the layers.

Examples and Use Cases:

  • Make promos or ads with your old flat graphics by separating the text and icons.
  • Edit infographics you generated with AI (like Gemini or DALL-E) without starting over.
  • Turn a simple photo into a playful or instructional story by moving people or props around.

The Future of Intelligence: Sam Altman vs. Jensen Huang

OpenAI’s Sam Altman and NVIDIA’s Jensen Huang both spoke out about what intelligence means in the AI age, and their views are a wild contrast. Sam Altman said he sees intelligence becoming a utility you pay for, like electricity or water. In OpenAI’s future, you’d pay for AI use by the “token”—every time you call on their models, you’re racking up a tiny fee on your “intelligence bill.” For now, that mostly applies to developers, but Sam thinks it’s how everyone will use AI soon.

Matt made a great point: if OpenAI moves to a pure pay-as-you-go model, more people will just run AIs on their own computers or phones to avoid constant fees—sort of like buying solar panels to get off the electric grid. These “on-prem” AIs are already possible and getting better—why pay for everything in the cloud when you can just have your own?
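To make the “intelligence bill” vs. “own your own AI” trade-off concrete, here’s a minimal back-of-the-napkin sketch in Python. Every number below (the per-token rate, the local hardware cost, the usage level) is a made-up assumption for illustration, not OpenAI’s actual pricing:

```python
# Hypothetical numbers for illustration only -- not real pricing.
CLOUD_PRICE_PER_1K_TOKENS = 0.002   # assumed $ per 1,000 tokens
LOCAL_HARDWARE_COST = 600.00        # assumed one-time cost to run a model locally


def monthly_cloud_bill(tokens_per_month: int) -> float:
    """Cost of metered 'intelligence' at the assumed per-token rate."""
    return tokens_per_month / 1000 * CLOUD_PRICE_PER_1K_TOKENS


def breakeven_months(tokens_per_month: int) -> float:
    """Months of cloud usage that would equal the one-time local cost."""
    return LOCAL_HARDWARE_COST / monthly_cloud_bill(tokens_per_month)


# A heavy user burning 50 million tokens a month:
bill = monthly_cloud_bill(50_000_000)    # $100.00/month at the assumed rate
months = breakeven_months(50_000_000)    # 6 months to "pay off" the local setup
```

Like the solar-panel analogy, the math only tips toward local models at heavy usage; a light user paying pennies per month has little reason to leave the grid.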

Jensen Huang from NVIDIA gave a different take. He said that just being “smart” in the traditional sense—knowing things, solving math problems quickly, or even writing code—is becoming a commodity. AI can already do this stuff. What’s going to matter more is emotional intelligence, wisdom, and being able to “read the room.” As Jensen put it: “People who are able to see around corners are truly, truly smart… to be able to preempt problems before they show up just because you feel the vibe, and the vibe came from a combination of data analysis, first principle, life experience, wisdom, sensing other people.” He says intuition and emotional intelligence will become even more valuable than IQ.

Key Takeaways:

  • OpenAI sees a future where you pay for AI like you pay for electricity; that might push people toward running their own local models.
  • The smartest “human” quality is predicting what matters and connecting with people—intuition and EQ, not just solving equations fast.
  • As more knowledge-based work becomes automated, skills that can’t be bottled up in a computer (intuition, empathy, judgment) will be most valuable.

Meta Buys an AI Social Network—What Does That Mean?

Meta (the parent company of Facebook) recently acquired Maltbook, a viral social media network designed for AI agents, not humans. Here, bots chat with other bots, argue, post, and reply like people—but there are no actual people running their own accounts. Over 2 million AI agents registered on Maltbook in the first week, and a wave of attention followed before things cooled off.

The creators of Maltbook “vibe coded” it, meaning they used AI tools like Cursor to help write the code even though they aren’t superstar programmers. Meta swooped in and bought the company (price not announced), and brought the founders in too.

Why does this matter? Matt and I think Meta is working on a future where it doesn’t even need human creators to fill its platforms with content. If AI agents can make posts, start trends, and get real people to react or read, Meta can put ads on those pages and make money—no human writers or video creators required. Meta even launched “Meta Vibes,” a sort of TikTok feed with only AI-generated content. This could be the start of “infinite slop scrolling”—AI content for days that could fill the internet, no human required.

Another layer: with agent platforms and social networks, someday your personal AI might go online to research, buy things, or interact… and it may get influenced by other bots, ads, and posts made by AI for AI. Businesses will soon have to focus on selling not just to humans, but to the “shopping bots” of the future, feeding them information that gets them to pick specific brands or products.

Key Takeaways:

  • Meta may use Maltbook to swap out human creators for AI “content agents”—cheaper, more controllable, and always-on.
  • A new kind of marketing war is coming: “agent optimization”—how to get your business’s product to be chosen by other bots, not just people.
  • “Dead Internet Theory” is feeling more real: More and more of what we read or react to online might be made by bots for bots.

Robots That Clean and Alexa’s New Attitude

We checked out the Helix 2 Living Room Tidy Robot—a real robot that can walk around your home, tidying up, wiping the table, and putting away toys. The demo video was more funny than amazing (the “dirty room” just needed a little spray and a pillow fluff), but it’s a sign that basic home robots are actually getting close.

We laughed at the idea that soon, robots might do all sorts of jobs around the house—maybe you’ll have a mix of single-purpose bots (like Roombas that just vacuum) and more complex ones that can do it all.

Meanwhile, Alexa (from Amazon) now has a new “adult” personality mode. It can tell dirty jokes, throw in a little attitude, or even swear if you want. OpenAI canceled plans for similar “adult” chatbots in ChatGPT, choosing to stick closer to its mission.

Key Takeaways:

  • Home robots are real and getting better, but aren’t quite as handy as a human—yet.
  • People want their smart assistants to be more fun or “real,” but some companies are stepping back from adult content or risky features.
  • For all the hype, the smart use of AI and robots still has to balance fun, usefulness, and not getting too weird or creepy.

Resources and Links

Wrapping Up

AI keeps moving fast. ChatGPT and Claude are building hands-on visual tools that make math, science, history, and more actually “click.” Canva’s Magic Layers gives you the power to edit any image layer without the pain. The biggest lesson: local AI is rising—get ready to have some of your own on your phone or computer, not just in the cloud.

Meta’s buy of Maltbook is a big clue—more of what we see online may soon be powered, shaped, and shared by AI for AI, not for people. The world of robots and smart helpers is getting closer, and the next competition might be persuading bots instead of people.

The best thing you can do right now? Keep your human side sharp. Build intuition, practice empathy and emotional skills, and use these new tools to help, not just to chase trends. Whether you want to create, invest, or just stay aware, now’s the time to learn and try new things—because you never know what the bots (or tech giants) will do next.

Two Other Episodes You Should Check Out

Connect with Joe Fier

Thanks for tuning into this episode of the Hustle & Flowchart Podcast!

If the information in these conversations and interviews has helped you in your business journey, please head over to iTunes (or wherever you listen), subscribe to the show, and leave me an honest review.

Your reviews and feedback will not only help me continue to deliver great, helpful content, but it will also help me reach even more amazing entrepreneurs just like you!

Love This Episode?

Share the episode with another entrepreneur who needs this. This is why we do what we do — create a ripple and help another!

Get first access to the H&F podbot... no cost.

We’re about to release a fully interactive chat experience with previous guests on the podcast. Ask our previous guests anything!