Coder Radio 636: Red Hat's James Huang

Duration: 20:53
 

Links
James on LinkedIn
Mike on LinkedIn
Mike's Blog
Show on Discord

Alice Promo

1. AI on Red Hat Enterprise Linux (RHEL)

Trust and Stability: RHEL provides the mission-critical foundation needed for workloads where security and reliability cannot be compromised.

Predictive vs. Generative: Acknowledging the hype of GenAI while maintaining support for traditional machine learning algorithms.

Determinism: The challenge of bringing consistency and security to emerging AI technologies in production environments.

2. RamaLama & Containerization

Developer Simplicity: RamaLama helps developers run local LLMs easily without being "locked in" to specific engines; it supports Podman, Docker, and various inference engines like llama.cpp and whisper.cpp (see the sketch after these notes).

Production Path: The tool is designed to "fade away" after helping package the model and stack into a container that can be deployed directly to Kubernetes.

Behind the Firewall: Addressing the needs of industries (like aircraft maintenance) that require AI to stay strictly on-premises.
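
To make the "no lock-in" point concrete, here is a minimal sketch of querying a model served locally with `ramalama serve`. It assumes the server fronts the model with an OpenAI-compatible chat endpoint on localhost port 8080 (a common llama.cpp server default); the model name, port, and prompt are placeholder assumptions, not details from the episode.

```python
# Minimal sketch: query a model served locally (e.g. via `ramalama serve MODEL`).
# Assumes an OpenAI-compatible chat endpoint on localhost:8080 -- adjust the
# host, port, and model name to match your actual setup.
import requests

ENDPOINT = "http://localhost:8080/v1/chat/completions"  # assumed default

payload = {
    "model": "granite",  # placeholder; use the model you actually served
    "messages": [
        {"role": "user", "content": "Summarize this aircraft maintenance log entry."}
    ],
    "temperature": 0.2,
}

resp = requests.post(ENDPOINT, json=payload, timeout=120)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```

Because the endpoint is OpenAI-compatible, the same client code keeps working if the container behind it is later deployed to Kubernetes, which is the "fade away" production path described above.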

3. Enterprise AI Infrastructure

Red Hat AI: A commercial product offering tools for model customization, including pre-training, fine-tuning, and RAG (Retrieval-Augmented Generation).

Inference Engines: James highlights the difference between llama.cpp (suited to smaller edge hardware) and vLLM, which has become the enterprise standard for multi-GPU data-center inference (see the sketch below).
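
For flavor, a minimal vLLM sketch of the data-center side: offline batch inference with tensor parallelism across several GPUs. The model name and parallelism degree are illustrative assumptions, not details from the episode.

```python
# Minimal sketch: vLLM offline batch inference sharded across multiple GPUs.
# Assumes vLLM is installed (pip install vllm) and the node has 4 GPUs;
# the model and settings below are illustrative, not from the episode.
from vllm import LLM, SamplingParams

llm = LLM(
    model="meta-llama/Llama-3.1-8B-Instruct",  # example model id
    tensor_parallel_size=4,  # split the model's weights across 4 GPUs
)

params = SamplingParams(temperature=0.2, max_tokens=128)
prompts = ["Explain the difference between predictive and generative AI."]

for output in llm.generate(prompts, params):
    print(output.outputs[0].text)
```

By contrast, llama.cpp targets quantized GGUF models on modest CPU or GPU hardware, which is what makes it the edge-side counterpart mentioned above.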
