Meet Your Instructors:
- Zao Yang — Owner of Newline and previously co-creator of FarmVille (200M users, $3B revenue) and Kaspa ($3B market cap). Self-taught across gaming, crypto, deep learning, and now generative AI. Newline is used by 250,000+ professionals from Salesforce, Adobe, Disney, Amazon, and more. Newline has built LLM-powered editorial tools, article generation using reinforcement learning and LLMs, and LLM-based instructor outreach tools. Newline is currently building generative AI products that will be announced soon.
- Dipen Bhuva — AI/ML researcher with 200+ citations and 16 published research papers, including 3 tier-1 publications in Internet of Things (Elsevier), Biomedical Signal Processing and Control (Elsevier), and IEEE Access. He has collaborated with NASA Glenn Research Center, Cleveland Clinic, and the U.S. Department of Energy on his research, and has served as an official reviewer of 100+ papers for Elsevier, IEEE Transactions, ICRA, MDPI, and other top journals and conferences. He holds a PhD from Cleveland State University, with a focus on LLMs in cybersecurity, and a master's in informatics from Northeastern University.
Who is this for?
Course Teaching Philosophy: concepts introduced through analogies first, then code exercises, reinforced through mini-projects and a core real-world project.
Code, exercises, and mini-projects designed to mirror real life, plus projects drawn directly from real-world work.
Support:
AI Engineer Accelerator Program Curriculum:
Unit 0: Foundations — Understand AI applications, how to pick a niche, how to grow your AI product audience & distribution channels
Unit 1: Foundations — Statistics, AI apps, your first AI app, metrics-based AI development (evaluations), synthetic data
Unit 2: Core Concepts — Prompt/context engineering, embeddings, multimodal embeddings, RAG, small language models, and 2-tuple fine-tuning
Unit 3: Transformers and Fine-Tuning — Transformers, advanced attention, instruction fine-tuning, quantized LoRA fine-tuning, multimodal fine-tuning, embedding fine-tuning
Unit 4: Pre-Training, Post-Training, and Agents — Re-creating GPT-2, agents and multi-agent architectures, mixture-of-experts and modern architectures, advanced RAG, RL for post-training, case studies for financial apps, case studies for text-to-code