
Content provided by Neil C. Hughes. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Neil C. Hughes or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process described here: https://it.player.fm/legal.

3528: How Boomi Thinks About Scaling AI Without Losing Control

26:49
 

What does it really mean to keep humans at the center of AI when agentic systems are accelerating faster than most organizations can govern them?

At AWS re:Invent, I sat down with Michael Bachman from Boomi for a wide-ranging conversation that cut through the hype and focused on the harder questions many leaders are quietly asking.

Michael leads technical and market research at Boomi, spending his time looking five to ten years ahead and translating future signals into decisions companies need to make today. That long view shaped a thoughtful discussion on human-centric AI, trust versus autonomy, and why governance can no longer be treated as an afterthought.

As businesses rush toward agentic AI, swarms of autonomous systems, and large-scale automation, Michael shared why this moment makes him both optimistic and cautious. He explained why security, legal, and governance teams must be involved early, not retrofitted later, and why observability and sovereignty will become non-negotiable as agents move from experimentation into production.

With tens of thousands of agents already deployed through Boomi, the stakes are rising quickly, and organizations that ignore guardrails today may struggle to regain control tomorrow.

We also explored one of the biggest paradoxes of the AI era: the more capable these systems become, the more important human judgment and critical thinking become.

Michael unpacked what it means for humans to stay in the loop versus on the loop, how trust in agentic systems should scale gradually, and why replacing human workers outright is often a short-term mindset that creates long-term risk. Instead, he argued that the real opportunity lies in amplifying human capability, enabling smaller teams to achieve outcomes that were previously out of reach.

Looking further ahead, the conversation turned to the limits of large language models, the likelihood of an AI research reset, and why future breakthroughs may come from hybrid approaches that combine probabilistic models, symbolic reasoning, and new hardware architectures. Michael also reflected on how AI is changing how we search, learn, and think, and why fact-checking, creativity, and cognitive discipline matter more than ever as AI assistants become embedded in daily life.

This episode offers a grounded, future-facing perspective on where AI is heading, why integration platforms are becoming connective tissue for modern systems, and how leaders can approach the next few years with both ambition and responsibility.

Useful Links

Tech Talks Daily is sponsored by Denodo
