Issue #13: Java at Netflix 🎬, OpenAI Custom Model Expansion 👀, and ChromeOS Mouse Button Customization 🖲️
Let's spice up your morning routine with the latest and most exciting updates from the tech world. So grab your favorite mug, settle in, and let's rock this tech-filled Sunday! 🚀☕
OpenAI expands its custom model training program🚀
OpenAI expands the Custom Model program for enterprise customers, introducing assisted fine-tuning and custom-trained models. SK Telecom fine-tunes GPT-4 for telecom conversations in Korean, while Harvey collaborates with OpenAI for a custom legal model. OpenAI envisions personalized models for all organizations to enhance AI impact.
Java at Netflix! 🎬
Did you know that Netflix relies heavily on Java for its backend applications? The company started with a microservices architecture, which has since evolved to incorporate new technologies. Initially, Netflix leveraged Groovy scripts and reactive programming to tailor API responses per device; it has now migrated to GraphQL Federation for a unified API, eliminating the need for custom backend layers. Netflix prioritizes recent Java versions for their performance improvements and actively integrates Spring Boot into its development processes.
Google ChromeOS Update Introduces Customizable Mouse Button Actions🖲️
Google's ChromeOS update allows users to customize mouse button actions, offering options like screenshot capture, muting, unmuting, and emoji insertion for mice with multiple buttons. Additionally, users can assign specific key combinations for actions typically triggered by keyboard shortcuts.
This update ships as part of ChromeOS M123 and is rolling out to users now, giving them noticeably more room to customize their setup.
Today's TWT pick is AutoQuant ⚡. AutoQuant lets users quantize models in five formats: GGUF, GPTQ, EXL2, AWQ, and HQQ.
GGUF: ideal for CPU inference (and LM Studio)
GPTQ/EXL2: GPU-based rapid inference
AWQ: rapid GPU inference via vLLM
HQQ: high-quality 2- and 3-bit models for extreme quantization
GGUF only requires a T4 GPU, but the other approaches require an A100 GPU to quantize a 7B model. AutoQuant automatically uploads converted models to the Hugging Face Hub. Here is the link to explore more about 💻 AutoQuant!
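Whatever the target format, these methods all share the same core idea: mapping floating-point weights to a small set of integer levels plus a scale factor. Here is a minimal pure-Python sketch of symmetric round-to-nearest quantization to illustrate the concept (this is not AutoQuant's actual code, and the function names are ours):

```python
def quantize_symmetric(weights, bits):
    """Quantize floats to signed integers using `bits` bits.

    Returns the integer codes and the scale needed to dequantize.
    """
    qmax = 2 ** (bits - 1) - 1                 # e.g. 3 for 3-bit, 1 for 2-bit
    scale = max(abs(w) for w in weights) / qmax  # largest weight maps to qmax
    codes = [max(-qmax - 1, min(qmax, round(w / scale))) for w in weights]
    return codes, scale

def dequantize(codes, scale):
    """Recover approximate float weights from integer codes."""
    return [c * scale for c in codes]

weights = [0.9, -0.45, 0.1, -0.02]
codes, scale = quantize_symmetric(weights, bits=3)
print(codes)                    # → [3, -2, 0, 0]
print(dequantize(codes, scale))
```

Note how the two small weights collapse to zero at 3 bits: that accuracy loss is exactly why "extreme" 2- and 3-bit schemes like HQQ need more sophisticated tricks than plain round-to-nearest.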