Want to learn more about the importance of AI agents and assistants in 2025? Register for Virtual Agents Day here →
Download the AI model guide to learn more →
The world of AI is changing, and changing quickly. Martin Keen, Master Inventor, is here to help set some expectations for AI in 2025. Will large language models (LLMs) get bigger? Smaller? Both? What's in store for AI agents? Will AI finally be able to remember everything? All of this and more speculation about what 2025 holds in store.
AI news moves fast. Sign up for a monthly newsletter for AI updates from IBM →
I liked your video and shared it on my own Facebook profile with Hungarian subtitles generated by YouTube. Of course, the original link is also included. I’d like to do the same in Romanian so that more people can understand what it’s about. Thank you very much!
The more AI the better; luckily, China is outpacing and outranking the US and EU in every STEM metric and will offer considerably higher-quality models
of course, the abrahamics will whine about being censored when the AI calls them a pyramid scheme 😂😂
#8: AI Safety / Ethics / Regulation
That generative AI is able to display watches without showing 10:10 all the time
agentic AI and domain specific models
My brother Milind
Trend #8: using agentic AI to design, build, test, and deploy front- and back-office apps
When will the first text-to-CAD/CAE/PCB AI finally appear? 🙁
robotics
First, 3D CPUs with integrated lower-bit-depth calculation aimed at neural processing will work like chips on RAM; to spread the heat, it's better to have multiple 3D chips. Using the breadth of the board space with 3D chip design will eventually lead to AI being done on small, tall optical pins, all linked wisely into the heat sink and fans. If you can do a small 3D circuit in a smaller package with more efficiency, it's better to leave the rest to the optical bus design. At the moment the layouts are too monolithic, and that will change.
🎯 Key points for quick navigation:
00:00 🤔 The speaker is predicting AI trends for 2025 without using insider information, relying on previous successful predictions.
00:29 📈 Increasing interest in AI agents, which can perform logical reasoning and action planning, but current models struggle with complex scenarios.
01:30 🕒 Upcoming AI advances focus on inference-time compute, giving models more time to process a query and improving reasoning without retraining (a rough sketch of the idea follows this summary).
02:54 📊 Future very large models are expected to reach up to 50 trillion parameters, enhancing AI complexity and capability.
03:26 📱 Contrasting with large models, very small models will use fewer resources, enabling AI on personal devices like laptops and phones.
04:27 🔒 Emerging use cases in 2025 include AI systems for customer service, IT optimization, and adaptive cybersecurity.
04:56 🧠 AI with near-infinite memory is on the horizon, capable of recalling extensive conversations, potentially enhancing user experiences.
05:24 🤖 The importance of human-AI collaboration; current systems sometimes underperform when humans and AI work together without optimized augmentation processes.
06:52 📣 The final trend discussion involves audience participation, inviting viewers to predict influential AI trends for 2025.
Made with HARPA AI
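On the inference-time compute point (01:30 above), here's a minimal sketch of one common flavor of the idea, best-of-N / self-consistency sampling: the same question is answered several times and the majority answer wins. The `generate_answer` function is a hypothetical stand-in for whatever model you actually call; nothing here is tied to a specific library.

```python
# Illustrative sketch of "inference-time compute": instead of a single pass,
# sample the model several times and take the most consistent answer.
from collections import Counter
import random

def generate_answer(prompt: str) -> str:
    # Placeholder model call: pretend it sometimes reasons correctly, sometimes not.
    return random.choice(["42", "42", "41"])

def answer_with_more_compute(prompt: str, n_samples: int = 16) -> str:
    # Spend extra compute at inference time: draw many candidate answers...
    candidates = [generate_answer(prompt) for _ in range(n_samples)]
    # ...then take a majority vote, which tends to improve reasoning accuracy
    # without retraining the underlying model.
    return Counter(candidates).most_common(1)[0][0]

print(answer_with_more_compute("What is 6 * 7?"))
```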
More AI everywhere…
Agree with all of these, although I think the focus on smaller models will be greater than on larger ones, with edge deployment and MoE models gaining multimodal capabilities being key.
Application convergence through more capable agents. Combined application capabilities will create great synergies and open the eyes of the general public to AI capabilities. This will be most evident in highly capable and connected personal assistants – which I think is really exciting as they will be on everyone's phone and really bring AI to the masses.
Maybe into 2026, I think we will see greater value placed on subject-matter expertise that isn't pure-play IT based. We'll start to see less and less actual code and more capable agentic operations, where users add value by being a doctor, lawyer, product expert, or UX expert, and also people working outside the roles we currently associate with AI as it becomes more mainstream – and that means everyone with a job.
On top of this, in late 2025 I see the transition to physical AI solutions, as this is a natural progression leading into fully fledged AI robotics. As per the first comment, high-efficiency edge compute will power super-low-latency customised devices, be they robots, IoT devices, or our phones.
Fantastic presentation! Thanks for this video.
Personally, I find AI chatbots and AI-powered customer support to be game-changers in many areas.
By the way, if you ever need datasets for AI projects, we’re a specialized agency for creating high-quality AI datasets!
1. Agentic AI
2. Inference-Time Compute
3. Very Large Models
4. Very Small Models
5. More Advanced Use Cases
6. Near-Infinite Memory
7. Human-in-the-Loop Augmentation
Very informative
For me, 2025 will be the year of conceptual architectures that use hypervectors to understand context rather than tokens… then there's work to do on cognition and associativity, and on implementing memories, because having millions of tokens makes no sense, and it makes no sense for the model to lose the thread of the conversation when priorities intervene… to do all this you need consolidated working and long-term memory.
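To make the hypervector idea above concrete, here is a toy hyperdimensional-computing sketch; the names, the 10,000-dimension size, and the encoding scheme are illustrative assumptions, not a specific library. Concepts become random high-dimensional vectors, binding composes role-filler pairs, bundling superposes them, and similarity retrieves what was stored.

```python
# Toy hyperdimensional-computing sketch: context as bound/bundled hypervectors.
import numpy as np

rng = np.random.default_rng(0)
DIM = 10_000  # hypervectors are typically very high-dimensional

def hv() -> np.ndarray:
    return rng.choice([-1, 1], size=DIM)               # random bipolar hypervector

def bind(a, b):   return a * b                          # element-wise multiply (role-filler binding)
def bundle(*vs):  return np.sign(np.sum(vs, axis=0))    # superposition of several vectors
def sim(a, b):    return float(a @ b) / DIM             # normalized similarity in [-1, 1]

# Encode a tiny "context": subject=cat, action=sleeps
SUBJECT, ACTION = hv(), hv()
cat, sleeps, dog = hv(), hv(), hv()
context = bundle(bind(SUBJECT, cat), bind(ACTION, sleeps))

# Query the context for its subject: unbinding is just binding again (a*a = 1)
probe = bind(context, SUBJECT)
print("cat?", sim(probe, cat))   # high similarity
print("dog?", sim(probe, dog))   # near zero
```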
By 2025, Very Small Models are expected to become the most widely used and efficient.
Quantum computing
2:55 are we still talking about AI here? 🤣🤣🤣
Less calling LLMs "AI", and in turn less talk about "AI" in general and more specific language in this area.
No. 8: On-device models, truly private and run on any device via wrappers such as LM Studio
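For the on-device point above, here's a minimal sketch of how that pattern often looks in practice, assuming LM Studio's local server is running on its default port with a model already loaded (the model name below is a placeholder). The standard `openai` client just points at localhost, so nothing leaves the machine.

```python
# Sketch of calling a local, on-device model through an OpenAI-compatible server
# (LM Studio and similar wrappers expose one); data stays on the local machine.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:1234/v1",  # local server, not a cloud endpoint
    api_key="not-needed-locally",         # the local server ignores the key
)

response = client.chat.completions.create(
    model="local-model",                  # placeholder; use whatever model you loaded
    messages=[{"role": "user", "content": "Summarize the 2025 AI trends in one line."}],
)
print(response.choices[0].message.content)
```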
People continuing to overhype AI… everyone being utterly bored with it all and less and less impressed with AI images
8. AI-Optimized Hardware
I wonder if LLMs have given any sort of real return on the energy spent running them. They make money, but what actual return have they brought to society that we didn't already have? Their info isn't dependable, and search engines are better for actually learning something. The amount of electricity going through them is gigantic, but have they made anything of obvious, real value? Seems like other kinds of AI are doing that, not LLMs. So, why? Why are we doing it? What's the point? The only product I can think of is video games that generate stories and characters in response to what the player does.
I'm all excited about neuromorphic computing
8. AI Governance (Security, identifying shadow AI,…)
Enhanced Security features