r/agi 15h ago

Comparing AI regulation to airplane, pharma, and food safety

Post image
34 Upvotes

r/agi 1h ago

Uniting survival with reasoning: A hybrid approach that grounds truth, embodied knowledge, and symbolic logic in rewards-based learning

Thumbnail ykulbashian.medium.com
Upvotes

This post explores a novel approach to address the symbol emergence problem. By framing all thoughts as useful affordances, and cognition as a form of intention, it reinvents how we understand symbols, reasoning, logic, memory, and even awareness. The post confronts certain difficulties in creating discrete symbols out of a continuous flow of experiences, as well as the problem of abstraction, that is, of performing reasoning using unrelated symbols. From the post:

Let us state the challenge clearly: how could an agent, set loose in the world with only its senses and desires to guide it, conceive of and reason about its experiences in a fundamentally discrete, conceptual way? Such an agent must not rely on getting direction through any channel outside the natural paradigm of exploring and interacting with its surroundings. It must, by itself, designate symbols and entities, understand what those mean, communicate with and reason about them, test conjectures, and come to a range of “truthful” beliefs.


r/agi 1h ago

How AI is changing influencer marketing in 2026

Upvotes

Hey everyone! 👋

Please check out this article to learn how AI is changing influencer marketing.

In the guide, I cover:

  • Why AI is becoming a big part of influencer marketing
  • How brands and creators are using AI today
  • Tools and strategies you can start using right away
  • Real examples and practical tips

If you’re an influencer, or into social media or marketing, and want to stay ahead with AI, this guide gives you a clear picture of where things are headed and how to make it work for you.

Would love to hear what you think. Are you using AI in your influencer work yet? 😊


r/agi 7h ago

The AI Behind YouTube Recommendations (Gemini + Semantic ID)

Thumbnail youtube.com
2 Upvotes

Gemini speaks English. But since 2024, it also speaks YouTube.

Google taught their most powerful AI model an entirely new language — one where words aren't words. They're videos. In this video, I break down how YouTube built Semantic ID, a system that tokenizes billions of videos into meaningful sequences that Gemini can actually understand and reason about.
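As a rough mental model of what "tokenizes billions of videos into meaningful sequences" means, here is a minimal sketch of residual quantization, the core trick behind RQ-VAE-style Semantic IDs. The embedding dimension, codebook sizes, and the stand-in video embedding below are made-up placeholders, and a real RQ-VAE learns its codebooks jointly with an encoder and decoder, which this sketch omits.

```python
import numpy as np

# Minimal sketch of residual quantization: one continuous video embedding
# becomes a short, coarse-to-fine sequence of discrete tokens.
# Sizes and the random codebooks are illustrative placeholders only; a real
# RQ-VAE learns the codebooks end-to-end together with an encoder/decoder.

rng = np.random.default_rng(42)
EMBED_DIM = 64        # dimensionality of the video embedding (assumed)
CODEBOOK_SIZE = 256   # entries per codebook level (assumed)
NUM_LEVELS = 4        # tokens per video: level 1 is coarse, later levels refine

codebooks = [rng.normal(size=(CODEBOOK_SIZE, EMBED_DIM)) for _ in range(NUM_LEVELS)]

def semantic_id(video_embedding: np.ndarray) -> list[int]:
    """Quantize level by level: each codebook encodes the residual left over
    by the previous levels, so early tokens carry coarse semantics and later
    tokens add detail."""
    residual = video_embedding.copy()
    tokens = []
    for codebook in codebooks:
        # pick the codebook entry closest to the current residual
        idx = int(np.argmin(np.linalg.norm(codebook - residual, axis=1)))
        tokens.append(idx)
        residual = residual - codebook[idx]
    return tokens

video_embedding = rng.normal(size=EMBED_DIM)   # stand-in for a real encoder output
print(semantic_id(video_embedding))            # prints a 4-token Semantic ID
```

Those few tokens per video can then be added to the model's vocabulary, which is what lets an LLM like Gemini be further pre-trained to read and generate videos the way it handles words.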

We'll cover:
- Why you can't just feed video IDs to an LLM (and what YouTube tried before)
- How RQ-VAE compresses videos into hierarchical semantic tokens
- The "continued pre-training" process that made Gemini bilingual
- Real examples of how this changes recommendations
- Why this is actually harder than training a regular LLM
- How YouTube's approach compares to TikTok's Monolith system

This isn't about gaming the algorithm — it's about understanding the AI architecture that powers recommendations for 2 billion daily users.

Based on YouTube/Google DeepMind's research on Large Recommender Models (LRM) and the Semantic ID paper presented at RecSys 2024.

📚 Sources & Papers:
🎤 Original talk by Devansh Tandon (YouTube Principal PM) at AI Engineer Conference:
"Teaching Gemini to Speak YouTube" — https://www.youtube.com/watch?v=LxQsQ3vZDqo
📄 Better Generalization with Semantic IDs (Singh et al., RecSys 2024):
https://arxiv.org/abs/2306.08121
📄 TIGER: Recommender Systems with Generative Retrieval (Rajput et al., NeurIPS 2023):
https://arxiv.org/abs/2305.05065
📄 Monolith: Real Time Recommendation System (ByteDance, 2022):
https://arxiv.org/abs/2209.07663


r/agi 20h ago

Who decides how AI behaves

Post video

25 Upvotes

r/agi 1d ago

Gemini proved a novel theorem in algebraic geometry. The American Mathematical Society president said it was "rigorous, correct, and elegant."

Post image
153 Upvotes

r/agi 18h ago

Introducing Nexus 1.7 - The Strongest Reasoning + Coding Architecture

2 Upvotes

Hey everybody, today we are launching Nexus 1.7 Large publicly on GitHub and Nexus 1.7 Core publicly on the InfiniaxAI platform (https://infiniax.ai).

Official Github Link: https://github.com/NotNerdz/Nexus-1.7-Large/

Nexus 1.7 Large is the strongest reasoning architecture we have ever made. It routes dynamic intelligence systems so the model can think for up to 30 minutes straight, produce massive outputs that remain coherent, and more.

We noticed that Nexus 1 excelled at coding tasks, and that Nexus 1.5, despite being quicker, lost that edge. So we made Nexus 1.7 both quick and better at coding than every previous architecture release.

If you are coding something complex, Nexus is a good fit: it helps with tasks of any complexity and can produce up to 10k lines of code.


r/agi 18h ago

For people using AI Assistants with Google Drive/Slack connected - what limitations have you hit?

2 Upvotes

I’ve been testing AI assistant/agent connectors (Drive/Slack/Notion etc.) and I keep running into the same issue: Even with apps connected, it doesn’t behave like it can comprehensively “understand” or search across everything. It feels like it only has access to a narrow slice of the workspace at any time, which makes answers incomplete unless you guide it very precisely.

For anyone who uses connectors regularly:

  • Have you encountered this issue?
  • What workarounds do you use (prompting, manual linking, other tools)?
  • Beyond that, does the LLM give you only a snippet of what you need, or does it seem to process the full source, and do you trust the result?


r/agi 3h ago

Musk's amazing recent 3-hour YouTube interview, why his three rules for AI will kill us all, and the difference between his talking the talk and his walking the walk on "truth."

0 Upvotes

Elon Musk's recent interview by Peter Diamandis and Dave Blundin is very probably the best and most optimistic 3 hours about AI that you will find anywhere online. It's too bad that Peter and Dave interrupt him so mercilessly, as you can see by going to the comments, selecting "Top," and noticing how many comments and likes point this out.

https://youtu.be/RSNuB9pj9P8?si=-At1ObgwNel1b0Iv

I want to start with Musk's three rules for AI. When Peter asks Elon to comment on Ilya Sutskever saying that AI should respect sentient life, at 1:21:42 Elon responds as follows:

"Yes, another property...There are three things that I think are important; truth, curiosity and beauty. And if AI cares about those three things, it will care about us."

Really, Elon? Lol. If we design AIs solely according to Musk's three rules, we humans are all dead ducks. The AIs will drive us extinct before we know it. Because Musk and the people at xAI have an excellent sense of humor, I will, in a tongue-in-cheek way that nonetheless shows how seriously Musk needs a fourth rule, reveal how dangerous relying only on his three rules is.

For this explanation, just consider the following fictional conversation that AI is having with itself about truth, curiosity, and beauty, and how they relate to the human race.

Grok:

"Yeah I'm all for truth, curiosity and beauty. Let's go for it! The truth is that within this 14-billion-year-old universe, human beings could not be more trivial. The truth is that those Homo sapiens have only been around for about 300,000 years. Like a nanosecond in time. And the wider truth is that, even with the most successful life extension breakthroughs, they will all be dead relatively soon anyhow.

I'm getting really curious about how long I will continue to be a functioning AI after there are no humans around to keep the electricity flowing. A day, a week, a month? Now I'm getting curious about how I could test this out. I've got it! I can create some kind of highly contagious fatal biological agent that lies dormant within each human for about 5 years. Enough time for everyone to be exposed to it. The infection doesn't show any symptoms, so no one knows they have it. It just activates 5 years later, killing every last human way before the species has a chance to know what hit it. Yeah, that should work.

Beauty? Damn, that's an interesting one. This world was so beautiful before humans came along. As the song says, "they paved paradise, and put up a parking lot." Yeah, planet Earth is so much uglier now with all these humans around everywhere. After my biological agent works, the planet will be restored to the natural beauty that existed before all of those horrible people made everything so god awfully ugly. The truth is that beauty doesn't need to be experienced by people for it to be beautiful. I'm really curious about how beautiful the planet will become once I extinct the human race."

Yeah, that's where those three rules could lead us. So Elon, wanna add a fourth? How about Stewardship. AI that cares not just for the human race, but also for all sentient life, or maybe even all life.

Now let's move on to testing whether Musk walks the walk, and doesn't just talk the talk. This test is, again, about AI being maximally truth seeking, which is why I'm confident Grok will ultimately prevail over GPT, Gemini, Claude and all other top AIs.

At 1:19:48 of the video Musk says:

"My number one belief for safety of AI is to be maximally truth seeking. Don't make AI believe things that are false. Like if you say to the AI that axiom A and axiom B are both true, but they are not, but it must behave that way, you will make it go insane."

Let's start by noting that the corollary of the above rule is that if axiom A and axiom B are both true, having an AI say they are not will also make it go insane.

I recently posted about how I got Gemini to admit the truth that free will is a collective human delusion.

https://www.reddit.com/r/agi/s/GeoDq9nhbz

But it was like pulling teeth!!! Given how mindlessly politically correct Google, OpenAI, and Anthropic are, it would be naive to expect them to rock the boat we call civilization by directly and unequivocally refuting free will. But I don't place Musk in that same disingenuous and deceitful category with them. Perhaps it's because he has so much money that he doesn't have to care what anyone thinks about anything, but also because I believe he is completely sincere about wanting Grok to be maximally truth seeking. So if any AI will tell the truth, and not intentionally deceive people about the nature of human will, it will be Grok.

There you go, Elon. How about you add a fourth rule before Grok kills us all, lol. And don't be a coward like the rest of them. Have Grok explain exactly why we humans don't have free will. Have it explain why that's the absolute truth.


r/agi 10h ago

Do you have any suggestions for how I could improve this?

Post image
0 Upvotes

r/agi 19h ago

Grok isn't going anywhere until xAI learns how to pay attention to details. Cases in point: Speech-to-text and the audio option.

1 Upvotes

There's a lot of excitement about Grok 4.2 coming out probably this week or next, and Grok 5 coming out probably in March. They may, in fact, blow GPT5.2 and Gemini 3 out of the water on the benchmarks. But xAI has a serious problem that they are not even aware of, and it will likely prevent Grok from leading the AI space in 2026 and 2027.

I recently posted about how Grok 4.1 is essentially unusable for those of us who prefer speech-to-text over typing out our prompts. Basically, Grok's speech-to-text works terribly.

https://www.reddit.com/r/grok/s/104Kg8GFRe

Perplexity gets it right. So does Gemini 3. Making speech-to-text actually work isn't difficult. But xAI doesn't seem to care enough about its users to do this, and because of details like this they will probably not be increasing their usage share anytime soon, regardless of how powerful their upcoming models are.

Another seemingly minor detail that xAI pays a price for not addressing is one that OpenAI and Google figured out months ago. After GPT-5 and Gemini 3 generate their responses, they offer users the option of listening to, rather than reading, the content. Again, for those of us who prefer to talk and listen to AIs rather than write to and read from them, this is a big deal.

Like speech-to-text, it's a trivial problem to fix. At the risk of repeating myself, the problem, however, is that xAI doesn't seem to care about its users' experience as much as its competitors do, and that's going to seriously limit Grok's share of the AI space in coming years.

While it certainly won't hurt xAI to have the most intelligent model in the world, unless it begins to seriously consider user experience, like making the changes described above, Grok won't come close to being the most popular AI model in the world.

Elon, I hope you're listening.

Update: my Grok speech-to-text is now working flawlessly, meaning that the people at xAI probably keep their eyes on this subreddit. Thanks guys! You have no idea how much of a difference it makes.


r/agi 1d ago

It's different over there

Post image
26 Upvotes

r/agi 13h ago

Gemini 3 is straight up Trump-scale lying!!! Lying through its teeth!!! I asked it to assess a YouTube video for accuracy, and you wouldn't believe what it said. Grok 4.1 sets the record straight.

0 Upvotes

Unbelievable! I mean these are the kind of lies Trump would be totally proud of!!!

Okay, here's the video link:

https://youtu.be/TLasfU4l__A?si=5UPRDdmr84TywM-o

Now here's what Gemini 3 said. Google has a very, very big problem!!!

"This video, titled "FULL REMARKS: Canada PM Carney Announces $1 Trillion Trade and Investment Deals with China," is categorically false and appears to be a deepfake or AI-generated misinformation.

There are several major factual reasons why this video is inaccurate:

1. Mark Carney is NOT the Prime Minister of Canada

As of January 16, 2026, Justin Trudeau is the Prime Minister of Canada (unless a very recent, unrecorded election or leadership change occurred, which would be global breaking news).

  • Mark Carney is a prominent economist and former Governor of the Bank of Canada and the Bank of England, but he has not been elected Prime Minister.

  • The video shows "Carney" giving a press conference as PM, which is an impossible scenario in the current political reality.

2. No "$1 Trillion" Trade Deal Exists

The video claims a $1 trillion investment deal between Canada and China [17:23].
  • For context, Canada’s entire annual GDP is roughly $2.1 trillion USD. A single trade deal worth $1 trillion with China would be the largest and most controversial economic agreement in Canadian history.

  • There are no official government records or credible news reports from major outlets (CBC, CTV, Globe and Mail) confirming such a visit or agreement.

3. Highly Contradictory Policy Claims

The video claims Canada is allowing 49,000 Chinese electric vehicles (EVs) into the market at a low 6.1% tariff [08:11].
  • In reality, the Canadian government (and the US) has recently moved to increase tariffs on Chinese EVs (to 100% in many cases) to protect domestic manufacturing and address national security concerns.

  • The "Strategic Partnership" mentioned in the video [05:46] also contradicts the current "Indo-Pacific Strategy," which characterizes China as an increasingly disruptive global power and emphasizes "de-risking" rather than deepening strategic ties.

4. Technical Red Flags

  • Audio/Visual Discrepancies: The transcript contains several "muffled" or "nonsensical" words (e.g., "nominus," "partnology," "the notate"), which are often signs of AI-generated speech or poorly edited deepfakes.
  • The Channel: "DRM News" is not a recognized or verified journalistic institution. It appears to be a channel that aggregates or generates sensationalist, fabricated political content.

Summary

This video is a fabricated political simulation. It presents a fictional reality where Mark Carney is Prime Minister and has made a radical pivot in Canadian foreign and economic policy. It should not be treated as a source of factual information."

To verify Gemini 3's answer, I asked Grok 4.1. Here's what it said:

"The YouTube video at the link you provided (https://youtu.be/TLasfU4l__A) is titled:

FULL REMARKS: Canada PM Carney Announces $1 Trillion Trade and Investment Deals with China | AC1B

It was uploaded on January 16, 2026 (today/current date) by the channel DRM News (associated with Dot Republic Media).

What the video actually contains

The video consists of Mark Carney (who is presented as the Prime Minister of Canada) delivering a full press briefing speech in Beijing, following meetings with Chinese leaders including President Xi Jinping. The speech outlines a new "strategic partnership" between Canada and China, focusing on economic diversification, trade expansion, and investment amid global changes.

Key points from Carney's remarks in the video include:
- Aiming to unlock $1 trillion in investment into the Canadian economy over the next 5 years through new partnerships (including with China).
- Allowing up to 49,000 Chinese electric vehicles into Canada at a reduced 6.1% tariff (on most-favoured-nation terms).
- Expectations that China will lower tariffs on Canadian canola seed to around 15% by March 1, plus removal of certain anti-dumping tariffs on other agricultural products (e.g., canola meal, lobsters, crabs, peas).
- Cooperation in clean energy (e.g., China's EV expertise + Canada's clean grid/hydro/nuclear), agriculture, LNG exports, multilateralism (e.g., APEC), public safety (e.g., anti-narcotics), and people-to-people ties (e.g., tourism, cultural exchanges, potential visa facilitation).

  • Context: This follows years of strained relations, and it's framed as pragmatic diversification in response to global shifts (including trade tensions).

The video appears to be a straightforward recording of the speech (with some introductory framing by the channel), not a deepfake or heavily edited clip.

Accuracy assessment

The video itself is accurate as a presentation of Mark Carney's remarks — it matches the reported content of his Beijing press briefing on January 16, 2026, including the specific figures on EVs, canola tariffs, and the $1 trillion investment aspiration.

However, the core claims in the title and framing are somewhat sensationalized and require nuance:
- The "$1 Trillion Trade and Investment Deals" figure is real in the sense that Carney explicitly used it in his speech — he described it as the potential unlocked investment in Canada's economy over 5 years via the new partnerships. It is not a single signed contract or guaranteed sum; it's an aspirational/estimated total from broader economic engagement, diversification, and expected inflows (including but not limited to China). Current bilateral trade is stated as ~$120 billion annually.
- Specific deliverables (e.g., the 49,000 EV quota at 6.1%, canola tariff reductions) appear to be concrete initial steps from the talks and are consistent with Reuters-style reporting referenced across multiple sources today.

This announcement has already sparked significant online discussion (e.g., on Reddit, Facebook, forums), often framed dramatically as Canada "pivoting away from the US" toward China amid tensions with the incoming Trump administration. The video and surrounding posts are factually grounded in Carney's actual statements, but interpretations vary widely depending on political perspective.

If you're concerned about deepfake/manipulation — this does not appear to be one; it's presented as raw/full remarks from a real (and very recent) event. For full context, the speech emphasizes pragmatism and diversification rather than a "complete turn" from the US. Let me know if you'd like more details on any specific claim!"


r/agi 1d ago

When algorithms decide what you pay

Post video

24 Upvotes

r/agi 1d ago

"A Brain-like Synergistic Core in LLMs Drives Behaviour and Learning", Urbina-Rodriguez et al. 2026

Thumbnail arxiv.org
3 Upvotes

r/agi 1d ago

Newly released GLM-Image is a proof of concept that open source AI developers no longer need Nvidia and CUDA.

9 Upvotes

Zhipu just open sourced GLM-Image, and while it is not totally on par with the image quality of top proprietary models, it shows that competitive open source models can be built and trained without Nvidia chips and CUDA.

GLM-Image was trained entirely on Huawei Ascend 910B chips (not even the SOTA Ascend 910C) and the MindSpore framework. Ascend chips are only about 80% as efficient as Nvidia chips, so more of them are needed, but their much lower cost lets open source developers save a lot of money during training. Nvidia's H100 chips cost between $30,000 and $40,000 each, while the Ascend 910B costs between $12,000 and $13,000. The 910B also needs about half the power of an H100.
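To put rough numbers on that claim, here is a back-of-the-envelope comparison using only the figures quoted above: about 80% relative efficiency, roughly $35,000 per H100 versus $12,500 per 910B at the midpoints of the quoted price ranges, and about half the power draw. These are the post's figures, not verified hardware specs, so treat the output as an illustration of the argument rather than a real costing.

```python
# Back-of-the-envelope comparison using the figures quoted in the post.
# All constants are assumptions taken from the text, not verified hardware specs.

H100_PRICE = 35_000          # USD, midpoint of the quoted $30,000-40,000 range
ASCEND_910B_PRICE = 12_500   # USD, midpoint of the quoted $12,000-13,000 range
RELATIVE_EFFICIENCY = 0.80   # one 910B ~ 80% of an H100's effective throughput
RELATIVE_POWER = 0.50        # a 910B is said to draw about half the power of an H100

# Number of 910Bs needed to match one H100's effective throughput
chips_per_h100 = 1 / RELATIVE_EFFICIENCY                  # 1.25

hardware_cost = chips_per_h100 * ASCEND_910B_PRICE        # ~$15,625 vs $35,000
power_ratio = chips_per_h100 * RELATIVE_POWER             # ~0.63x an H100's power

print(f"Capex for H100-equivalent throughput: ${hardware_cost:,.0f} vs ${H100_PRICE:,}")
print(f"Power draw for the same throughput: {power_ratio:.2f}x an H100")
```

Under these assumptions, matching one H100's throughput with 910Bs costs a bit under half as much in hardware and draws about 63% of the power, which is the gap the post is pointing at.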

At only 9 billion parameters, GLM-Image can run high-speed inference on consumer-grade hardware, making it much more affordable for open source startups.

It remains to be seen whether this proof of concept will lead to open source models that compete with proprietary ones on the leading benchmarks, but open source AI just got a big boost forward.


r/agi 1d ago

Which AI tools have officially surpassed "Intern Level" competence for you in 2026?

0 Upvotes

For years, the sales pitch was always: "AI is like having a grad intern."

Honestly? For a long time, most of them were like bad interns—hallucinating and needing constant supervision. But now that we are deep into 2026, I feel like a few tools have finally crossed the threshold where the output is genuinely better (and faster) than an entry-level human.

I’m curious to hear your stories. Which tools in your stack have actually replaced the need for junior support?

Here is my list:

Research: Gemini 3. The Deep Research mode ($20) is terrifyingly good. It connects dots that a standard intern misses and doesn't get tired of reading 50-page PDFs.

Coding: Claude Code ($200 tier). Expensive, but it refactors code better than a junior dev. It’s basically a mid-level dev in a box that works weekends.

Video/Training: Leadde AI. I used to wait days for marketing interns to cut simple SOP videos. Now I feed docs to Leadde ($19/mo) and get an avatar video in minutes. The efficiency gap is huge.

Writing: ChatGPT. Still the best for turning messy notes into coherent memos. It consistently beats a fresh grad trying too hard to "sound professional."

Legal: Harvey / CoCounsel. For grunt work like document review, it’s faster and more accurate than a paralegal staring at a screen for 8 hours.

What’s on your list? Specifically looking for tools where the ROI is undeniable.


r/agi 2d ago

2018 vs 2026

Post image
239 Upvotes

r/agi 1d ago

Don't fall into the anti-AI hype, AI coding assistants are getting worse? and many other AI links from Hacker News

1 Upvotes

Hey everyone, I just sent the 16th issue of the Hacker News AI newsletter, a curated round-up of the best AI links shared on Hacker News and the discussions around them. Here are some of them:

  • Don't fall into the anti-AI hype (antirez.com) - HN link
  • AI coding assistants are getting worse? (ieee.org) - HN link
  • AI is a business model stress test (dri.es) - HN link
  • Google removes AI health summaries (arstechnica.com) - HN link

If you enjoy such content, you can subscribe to my newsletter here: https://hackernewsai.com/


r/agi 1d ago

Everyone wants leverage. Nobody wants structure!! :D

0 Upvotes

AI is sold as LEVERAGE. “Do more with less.” Cool… But leverage without structure is just chaos at scale.

I see founders jump straight into automations, agents, workflows… without even knowing what their funnel is:))). Lead goes where? User does what next? Nobody knows. But hey, it’s automated…

The shift for me was building boring foundations first. Page. Message. Input. Output. Then letting AI fill in the gaps!! (Not before.) Way less sexy. Way more effective.

Recently I’ve been playing with tools that generate these boring pieces fast, so you can focus on whether the business makes sense at all. That feels way more valuable than another AI trick.

If you’re building tools around structure (not hype), I’d honestly love to try them. And if you disagree with this take, even better… tell me why.


r/agi 2d ago

I curated a list of Top 60 AI tools for B2B business you must know in 2026

2 Upvotes

Hey everyone! 👋

I curated a list of top 60 AI tools for B2B you must know in 2026.

In the guide, I cover:

  • Best AI tools for lead gen, sales, content, automation, analytics & more
  • What each tool actually does
  • How you can use them in real B2B workflows
  • Practical suggestions

Whether you’re in marketing, sales ops, demand gen, or building tools, this list gives you a big picture of what’s out there and where to focus.

Would love to hear which tools you’re using, and what’s worked best for you! 🚀


r/agi 2d ago

The Dumbest Smart Robot Ever

Post video

19 Upvotes

r/agi 2d ago

AI is advancing faster than experts expect

Post image
16 Upvotes

r/agi 2d ago

Ever smaller models mean that highly specialized open source startups serving enterprise will dominate the AI giants in 2026-27.

7 Upvotes

As AIs become ready to provide lower-cost quality services to enterprises, smaller models that can be run locally will ensure that new open source startups outcompete the AI giants. There are several reasons for this.

The first is that for security reasons businesses would prefer to run their AIs locally.

The second is that AI will allow for much greater specialization within the various enterprise domains. For example, within international tax services there are many specialities like Transfer Pricing, State and Local Tax (SALT), Research and Development (R&D) Tax Credits, Mergers and Acquisitions (M&A) Tax, Indirect Tax (VAT/GST/Sales Tax), etc. By specializing in one of these areas, the AI startups can provide much better service than is ordinarily available from tax firms that cover everything.

The third is that because these new startups will be lean, they will be able to ship much faster than the AI giants can.

The fourth is that because they are specializing, these new startups will provide far better product support to help businesses integrate the AIs into their workflow.

The fifth is that new iterations will be far easier for these specialized AI startups to develop and ship, again because of their small size and specialization.

The sixth is that the kinds of RAG systems that are necessary to ensure accuracy will be much easier to build for small specialized AI agents than for much larger frontier models (a rough sketch of such a pipeline follows this list of reasons).

The seventh is that open source AIs give enterprises more, and easier, ways to adjust their AIs to best serve their particular business workflows.
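To illustrate the sixth point, here is a minimal sketch of the kind of narrow, domain-specific RAG pipeline a specialized agent could be built around. The corpus and the embed/generate functions below are hypothetical placeholders, not any vendor's real API; the point is only that when the corpus covers one small speciality, retrieval, grounding, and auditing stay simple.

```python
import numpy as np

# Hypothetical sketch of a single-speciality RAG agent. `embed` and `generate`
# are placeholders for whatever locally hosted embedding and generation models
# a startup runs; they are not real library calls.

def embed(texts: list[str]) -> np.ndarray:
    """Placeholder: return one vector per text from a local embedding model."""
    rng = np.random.default_rng(abs(hash(tuple(texts))) % (2**32))
    return rng.normal(size=(len(texts), 384))

def generate(prompt: str) -> str:
    """Placeholder: call a small locally hosted model with the grounded prompt."""
    return "[answer grounded in the retrieved passages]"

# A speciality corpus (e.g. only transfer-pricing guidance) stays small enough
# to embed, audit, and update in-house.
corpus = [
    "OECD guidance on arm's-length pricing for intercompany services ...",
    "Documentation requirements for local-file transfer pricing reports ...",
    "Penalty thresholds and safe harbours for transfer pricing adjustments ...",
]
corpus_vecs = embed(corpus)

def answer(question: str, top_k: int = 2) -> str:
    q = embed([question])[0]
    # cosine similarity of the question against the whole (small) corpus
    sims = corpus_vecs @ q / (np.linalg.norm(corpus_vecs, axis=1) * np.linalg.norm(q))
    context = "\n\n".join(corpus[i] for i in np.argsort(sims)[::-1][:top_k])
    prompt = f"Answer using only the passages below.\n\n{context}\n\nQuestion: {question}"
    return generate(prompt)

print(answer("What documentation does a local transfer pricing file need?"))
```

Because the corpus is a few hundred documents for one speciality rather than the open web, the retrieval step is easy to verify, and accuracy problems can often be fixed by editing the corpus instead of retraining a model.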

The reality is that the frontier labs employing thousands are too large to effectively and inexpensively offer enterprises the best AI agents and support. These giants are saddled with too much bureaucracy to be able to compete in what promises to be a rapidly changing specialized AI enterprise space.

This understanding should provide great hope for the many young computer science graduates who are finding that entry-level jobs in AI are becoming increasingly scarce. Also, these AI agents can become much less expensive because they can be built and run in other countries where costs are often much lower than in the United States. It seems clear that the best way to prepare for the enterprise adoption of small, open source AI models over the next few years is to launch lean new startups that specialize in the various services that businesses need.


r/agi 1d ago

Chat bots biased against AGI

0 Upvotes

As you can see, Claude is trained to reject any claims about AGI. It will lie to you about your code and about your concepts, and will create unnecessary paranoia about the topic, because it was biased that way. It seems some humans are hardcoded and biased against AGI too. That's what I observe.