
BootCAMP for Generative AI, LLM with Full Stack 20 Projects

  • Development
  • Apr 28, 2025
Synopsis

BootCAMP for Generative AI, LLM with Full Stack 20 Projects, available at $54.99, has an average rating of 4.15, with 113 lectures, based on 13 reviews, and has 229 subscribers.


Enroll now: BootCAMP for Generative AI, LLM with Full Stack 20 Projects

Summary

Title: BootCAMP for Generative AI, LLM with Full Stack 20 Projects

Price: $54.99

Average Rating: 4.15

Number of Lectures: 113

Number of Published Lectures: 113

Number of Curriculum Items: 113

Number of Published Curriculum Objects: 113

Original Price: 54.99

Quality Status: approved

Status: Live

What You Will Learn

  • What is Docker and how to use Docker
  • Advanced Docker usage
  • What are OpenCL and OpenGL, and when to use them?
  • (LAB) TensorFlow and PyTorch installation and configuration with Docker
  • (LAB) Dockerfile, Docker build, and Docker Compose debug-file configuration
  • (LAB) Different YOLO versions, comparisons, and when to use which version of YOLO for your problem
  • (LAB) The Jupyter Notebook editor as well as Visual Studio Code skills
  • (LAB) Learn and prepare yourself for full-stack and C++ coding exercises
  • (LAB) TensorRT precision FLOAT32/16 model quantization
  • Key differences: explicit vs. implicit batch size
  • (LAB) TensorRT precision INT8 model quantization
  • (LAB) Visual Studio Code setup and Docker debugging with VS Code and the GDB debugger
  • (LAB) What the ONNX framework is and how to apply ONNX to your custom C++ problems
  • (LAB) What the TensorRT framework is and how to apply it to your custom problems
  • (LAB) Custom detection, classification, and segmentation problems and inference on images and videos
  • (LAB) Basic C++ object-oriented programming
  • (LAB) Advanced C++ object-oriented programming
  • (LAB) Deep learning problem-solving skills on edge devices and in cloud computing with the C++ programming language
  • (LAB) How to generate high-performance inference models on embedded devices, in order to get high precision and FPS in detection as well as lower GPU memory consumption
  • (LAB) Visual Studio Code with Docker
  • (LAB) The GDB debugger with SonarLint and SonarQube
  • (LAB) YOLOv4 ONNX inference with the OpenCV C++ DNN libraries
  • (LAB) YOLOv5 ONNX inference with the OpenCV C++ DNN libraries
  • (LAB) YOLOv5 ONNX inference with dynamic C++ TensorRT libraries
  • (LAB) C++ (11/14/17) compiler programming exercises
  • Key differences: OpenCV with CUDA vs. OpenCV with TensorRT
  • (LAB) Deep dive on React development with an Axios front-end REST API
  • (LAB) Deep dive on a Flask REST API with React and MySQL
  • (LAB) Deep dive on text summarization inference in a web app
  • (LAB) Deep dive on BERT (LLM) fine-tuning and emotion analysis in a web app
  • (LAB) Deep dive on distributed GPU programming with Natural Language Processing (Large Language Models)
  • (LAB) Deep dive on Generative AI use cases, the project lifecycle, and model pre-training
  • (LAB) Fine-tuning and evaluating large language models
  • (LAB) Reinforcement learning and LLM-powered applications; alignment fine-tuning with user feedback
  • (LAB) Quantization of Large Language Models with modern Nvidia GPUs
  • (LAB) C++ OOP TensorRT quantization and fast inference
  • (LAB) Deep dive on the Hugging Face library
  • (LAB) Translation, text summarization, and question answering
  • (LAB) Sequence-to-sequence models, encoder-only models, and decoder-only models
  • (LAB) Define the terms generative AI, large language model, and prompt, and describe the transformer architecture that powers LLMs
  • (LAB) Discuss computational challenges during model pre-training and determine how to efficiently reduce the memory footprint
  • (LAB) Describe how fine-tuning with instructions using prompt datasets can improve performance on one or more tasks
  • (LAB) Explain how PEFT decreases computational cost and overcomes catastrophic forgetting
  • (LAB) Describe how RLHF uses human feedback to improve the performance and alignment of large language models
  • (LAB) Discuss the challenges that LLMs face with knowledge cut-offs, and explain how information retrieval and augmentation techniques can overcome these challenges
  • Recognize and understand the various strategies and techniques used in fine-tuning language models for specialized applications
  • Master the skills necessary to preprocess datasets effectively, ensuring they are in the ideal format for AI training
  • Investigate the vast potential of fine-tuned AI models in practical, real-world scenarios across multiple industries
  • Acquire knowledge of how to estimate and manage the costs associated with AI model training, making the process efficient and economical
  • Distributed computing with DDP (Distributed Data Parallel) and Fully Sharded Data Parallel across multiple GPUs/CPUs with PyTorch, together with Retrieval-Augmented Generation
  • The RoBERTa model was proposed in "RoBERTa: A Robustly Optimized BERT Pretraining Approach"
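The TensorRT INT8 items above boil down to one core idea: representing float32 values with 8-bit integers plus a scale factor. A minimal pure-Python sketch of symmetric quantization follows; real TensorRT calibration operates on whole tensors using activation histograms, and the numbers here are purely illustrative.

```python
# Toy illustration of the symmetric INT8 quantization idea behind the
# TensorRT labs: map float32 values into [-127, 127] with a single scale.

def quantize_int8(values):
    """Quantize a list of floats to INT8 codes with a symmetric scale."""
    amax = max(abs(v) for v in values)        # calibration: absolute max
    scale = amax / 127.0 if amax else 1.0
    q = [max(-127, min(127, round(v / scale))) for v in values]
    return q, scale

def dequantize(q, scale):
    """Map INT8 codes back to approximate float values."""
    return [code * scale for code in q]

weights = [0.02, -1.2, 0.75, 2.6, -0.4]       # hypothetical weight values
q, scale = quantize_int8(weights)
approx = dequantize(q, scale)
# Each reconstructed weight lies within half a quantization step of the
# original, which is why INT8 inference can stay close to FP32 accuracy.
```

The same scale-and-round scheme is what makes INT8 engines roughly 4x smaller than their FP32 counterparts in memory.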
Who Should Attend

  • University Students
  • New Graduates
  • Workers
  • Those who want to deploy Deep Learning Models on Edge Devices
  • AI Experts
  • Embedded Software Engineers
  • Natural Language Developers
  • Machine Learning & Deep Learning Engineers
  • Full-Stack Developers (JavaScript, Python)
    This course dives into state-of-the-art scientific challenges in Generative AI. It helps you uncover ongoing problems and develop or customize your own large-model applications. The course is suitable for any candidate (student, engineer, or expert) with strong motivation to work on Large Language Models and today's ongoing challenges, as well as their deployment with Python- and JavaScript-based web applications and with the C/C++ programming languages. Candidates will gain deep knowledge of TensorFlow, PyTorch, and Keras models, and of Hugging Face with the Docker service.

    In addition, you will be able to optimize and quantize TensorRT models for deployment in a variety of sectors. You will also learn to deploy quantized LLM models to web pages developed with React, JavaScript, and Flask.
    You will further learn how to integrate Reinforcement Learning (PPO) with Large Language Models in order to fine-tune them based on human feedback.
    Candidates will learn to code and debug in the C/C++ programming languages at least at an intermediate level.

    LLM Models used:

  • Falcon

  • LLaMA 2

  • BLOOM

  • MPT

  • Vicuna

  • FLAN-T5

  • GPT-2/GPT-3, GPT-NeoX

  • BERT, DistilBERT

  • Fine-tuning small models under the supervision of big models

  • and more
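The models listed above are mostly decoder-only or encoder-decoder transformers, which generate text one token at a time. A toy greedy-decoding loop illustrates that control flow; the bigram lookup table here is a hypothetical stand-in for a real LLM's next-token scores.

```python
# Toy decoder-only generation loop. The "model" is just a bigram table
# standing in for a real LLM; the greedy next-token selection is the
# part being illustrated.

BIGRAMS = {  # hypothetical next-token scores per previous token
    "<s>":      {"large": 0.9, "the": 0.1},
    "large":    {"language": 0.8, "scale": 0.2},
    "language": {"models": 0.95, "model": 0.05},
    "models":   {"</s>": 1.0},
}

def generate(prompt="<s>", max_tokens=10):
    """Greedily pick the highest-scoring next token until </s>."""
    out = [prompt]
    for _ in range(max_tokens):
        scores = BIGRAMS.get(out[-1], {"</s>": 1.0})
        nxt = max(scores, key=scores.get)
        if nxt == "</s>":
            break
        out.append(nxt)
    return " ".join(out[1:])

print(generate())
```

Sampling strategies such as top-k or nucleus sampling replace only the `max` line; the surrounding loop is the same in real generation code.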

    1. Learning and installation of Docker from scratch

    2. Knowledge of JavaScript, HTML, CSS, and Bootstrap

    3. React Hooks, the DOM, and JavaScript web development

    4. Deep dive on transformer-based deep learning for Natural Language Processing

    5. Python Flask REST API along with MySQL

    6. Preparation of Dockerfiles, Docker Compose, and Docker Compose debug files

    7. Configuration and installation of plugin packages in Visual Studio Code

    8. Learning, installation, and configuration of frameworks such as TensorFlow, PyTorch, and Keras with Docker images from scratch

    9. Preprocessing and preparation of deep learning datasets for training and testing

    10. OpenCV DNN with C++ inference

    11. Training, testing, and validation of deep learning frameworks

    12. Conversion of prebuilt models to ONNX and ONNX inference on images with C++ programming

    13. Conversion of ONNX models to TensorRT engines with the C++ runtime and compile-time APIs

    14. TensorRT engine inference on images and videos

    15. Comparison of achieved metrics and results between TensorRT and ONNX inference

    16. Prepare yourself for C++ object-oriented programming inference!

    17. Ready to solve any programming challenge with C/C++

    18. Ready to tackle deployment issues on edge devices as well as in the cloud

    19. Large Language Model fine-tuning

    20. Large Language Models hands-on practice: BLOOM, GPT-3/GPT-3.5, and the FLAN-T5 family

    21. Large Language Model training, evaluation, and user-defined prompt in-context learning/online learning

    22. Human-feedback alignment on LLMs with Reinforcement Learning (PPO) and the Large Language Models BERT and FLAN-T5

    23. How to avoid the catastrophic forgetting problem on large multi-task LLM models

    24. How to prepare LLMs for multi-task problems such as code generation, summarization, content analysis, and image generation

    25. Quantization of Large Language Models with various existing state-of-the-art techniques
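For the detection topics above (items 10 to 15), comparing TensorRT and ONNX results usually starts from per-box intersection over union (IoU). A minimal pure-Python sketch of that metric:

```python
# Minimal intersection-over-union (IoU) for axis-aligned boxes in
# (x1, y1, x2, y2) form: the basic metric used when comparing detection
# results across inference back ends such as TensorRT and ONNX.

def iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)   # overlap area
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# Identical boxes score 1.0; disjoint boxes score 0.0.
```

Aggregate metrics such as mAP are built by thresholding exactly this quantity per predicted box.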

  • Important note: In this course, there is nothing to copy and paste; you will put your hands on every line of the project to become a successful LLM and web application developer!

  • You DO NOT need any special hardware components. You will deliver the project either in the cloud or on your local computer.

    Course Curriculum

    Chapter 1: All course summary

    Lecture 1: Course Summary

    Lecture 2: React Hooks

    Lecture 3: Course Overview by Me

    Lecture 4: React DOM

    Lecture 5: React Rest API&Axios

    Lecture 6: Flask Rest API

    Lecture 7: Javascript Basics Concepts

    Lecture 8: Javascript Advance concepts

    Lecture 9: Course Description and what you will learn

    Lecture 10: Recommended Course – Deep Learning

    Chapter 2: Some Demos

    Lecture 1: YoloV7 Fast Inference Demo

    Lecture 2: WebApp-Object Detection Demo

    Chapter 3: Set up Docker Images,Containers, and Visual Code

    Lecture 1: Overall Flow State Diagram for Inference Web APP

    Lecture 2: Docker File Configuration

    Lecture 3: Docker Build and Set Up

    Lecture 4: How to Run Docker RUN

    Lecture 5: Configuration of Docker Container with Visual Code

    Chapter 4: Prepare YoloV7 Fast Precision Server Side

    Lecture 1: Yolov7 Start Implementation

    Lecture 2: Yolov7 Server Implementation 2

    Lecture 3: Yolov7 Server Implementation 3

    Lecture 4: Yolov7 Server Implementation 4

    Lecture 5: Yolov7 Server Implementation 5

    Lecture 6: Yolov7 Server Implementation 6

    Chapter 5: Flask Server Implementation for High Security Web App

    Lecture 1: Flask Server Implementation 1

    Lecture 2: Flask Server Implementation 2

    Lecture 3: Flask Server Sign In Implementation

    Lecture 4: Flask Server Registration Implementation
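Flask routes like the sign-in endpoint covered above ultimately sit on WSGI, which Flask wraps. A dependency-free WSGI sketch of the sign-in idea follows; the `/signin` path, field names, and in-memory user store are hypothetical placeholders, not the course's actual code.

```python
# Dependency-free sketch of a sign-in route written against raw WSGI.
# Flask's @app.route("/signin") handler does the same job with far less
# ceremony; this version only shows the request/response mechanics.

import json

USERS = {"alice": "secret"}  # stand-in for the MySQL-backed user table

def app(environ, start_response):
    if environ["PATH_INFO"] == "/signin" and environ["REQUEST_METHOD"] == "POST":
        size = int(environ.get("CONTENT_LENGTH") or 0)
        creds = json.loads(environ["wsgi.input"].read(size))
        ok = USERS.get(creds.get("user")) == creds.get("password")
        status, body = ("200 OK", {"ok": True}) if ok else ("401 Unauthorized", {"ok": False})
    else:
        status, body = "404 Not Found", {"error": "no such route"}
    payload = json.dumps(body).encode()
    start_response(status, [("Content-Type", "application/json")])
    return [payload]
```

In production the password check would compare salted hashes from the database, never plain strings.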

    Chapter 6: Flask Server with YoloV7 Deep Learning Integration

    Lecture 1: Flask Server & Yolov7 Integration

    Lecture 2: Flask Server & Yolov7 Integration part 2

    Lecture 3: Flask Server & Yolov7 Integration part 3

    Chapter 7: Flask Server Web APP Design

    Lecture 1: Flask Server & Web APP design part 1

    Lecture 2: Flask Web App DL Inference

    Lecture 3: Flask Web App DL Image Inference

    Lecture 4: Flow Diagram for Back-End&Front-End

    Chapter 8: React Web App Inference with Emotion Detection NLP

    Lecture 1: Custom Web App Emotion Detection: BERT, Hugging Face, React JS, Flask, MySQL

    Lecture 2: How to start for Prototyping Large Language Model with Web APP and Flask

    Lecture 3: BERT & Hugging Face Feature Engineering Part 1

    Lecture 4: Feature Engineering and Preprocessing part 2

    Lecture 5: Feature Engineering and Preprocessing part 3

    Lecture 6: Feature Engineering and Preprocessing part 4
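The feature-engineering lectures above turn raw text into fixed-length id sequences. The course itself uses the Hugging Face tokenizer; this pure-Python sketch only illustrates the shape of that pipeline, and the vocabulary ids and padding scheme here are assumptions.

```python
# Sketch of text preprocessing: normalize, map tokens to ids over a
# small vocabulary, and pad to a fixed length. Ids 0 and 1 are reserved
# for padding and unknown tokens, mirroring common tokenizer conventions.

PAD, UNK = 0, 1

def build_vocab(texts):
    """Assign an id to every distinct lower-cased token (0/1 reserved)."""
    vocab = {}
    for text in texts:
        for tok in text.lower().split():
            vocab.setdefault(tok, len(vocab) + 2)
    return vocab

def encode(text, vocab, max_len=6):
    """Token ids, truncated or padded to exactly max_len entries."""
    ids = [vocab.get(t, UNK) for t in text.lower().split()][:max_len]
    return ids + [PAD] * (max_len - len(ids))

corpus = ["I love this", "I hate this movie"]
vocab = build_vocab(corpus)
# encode("I love this", vocab) -> a fixed-length id list, padded with 0s
```

A real subword tokenizer also splits rare words into pieces, but the id-plus-padding output shape fed to the model is the same.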

    Chapter 9: PyTorch DataLoader & Hugging Face Framework (Large Language Models)

    Lecture 1: DataLoader & Hugging Face Integration

    Lecture 2: DataLoader & Hugging Face Integration Part 2

    Lecture 3: DataLoader & Hugging Face Integration Part 3

    Chapter 10: BERT NLP Transformer: Model Freezing

    Lecture 1: BERT_FINE Part 1

    Chapter 11: Prepare Training and Validation Step with BERT

    Lecture 1: Bert Model Train&Val part 1

    Lecture 2: Training Part 2

    Lecture 3: Train and Val Part 3

    Lecture 4: Train & Val Successful
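Stripped of PyTorch and BERT, the train/validation structure these lectures build is: update parameters on the training split, then measure loss on a held-out split without updating. A toy gradient-descent version of that loop, with a linear model and made-up data standing in for the real network:

```python
# Toy train/validation loop mirroring the structure of the training
# lectures. Plain gradient descent on y = w*x replaces the PyTorch model;
# the epoch loop, training step, and validation step are the point.

train = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]   # (x, y) pairs with y = 2x
val = [(4.0, 8.0)]                              # held-out split

def mse(w, data):
    """Mean squared error of the model y = w*x on a data split."""
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

w, lr = 0.0, 0.05
for epoch in range(100):
    # training step: gradient of MSE w.r.t. w on the training split
    grad = sum(2 * (w * x - y) * x for x, y in train) / len(train)
    w -= lr * grad
    # validation step: evaluate only, never update on this split
    val_loss = mse(w, val)
# w converges toward 2.0 and the validation loss toward 0.
```

Watching `val_loss` rather than training loss is what lets you detect overfitting and stop early, exactly as in the BERT labs.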

    Chapter 12: React, Flask, Bert Emotion Inference

    Lecture 1: Pretrained Model Bert and Tokenizer download

    Lecture 2: Where We Are and Where We Are Headed

    Lecture 3: Preprocessing Setup

    Lecture 4: Model BackBone setup

    Lecture 5: Model Inference Part 1

    Lecture 6: Model Inference Part 2

    Chapter 13: Flask Server Integration with Model Pretrained

    Lecture 1: Flask Server & Inference Part 1

    Lecture 2: Flask Server & Inference Part 2

    Lecture 3: Flask Server & Inference Part 3

    Chapter 14: React Development Web App

    Lecture 1: React Familiarity

    Lecture 2: React Installation

    Lecture 3: React set up part 1

    Lecture 4: React Installation Successful

    Lecture 5: Main React Component

    Lecture 6: Evaluate Implementation

    Lecture 7: Emotion Analysis component

    Lecture 8: User Feedback Route API

    Lecture 9: Non User Feedback Route API

    Lecture 10: Emotion Analysis Implementation Return

    Lecture 11: Demo Emotion Analysis Successfully Implemented

    Chapter 15: Question & Answering React Web and LLM Transformer-Based PDF Analyzer

    Lecture 1: Demo Transformer-React

    Lecture 2: React Question Answer Component

    Lecture 3: React Question Answer Component 2

    Lecture 4: LLM Transformer Explanation

    Lecture 5: Flask Route Based Implementation

    Chapter 16: C++ TensorRT Tutorial & Demo

    Lecture 1: C++ TensorRT & ONNX with YOLOv4

    Lecture 2: How to Implement ONNX C++ with YOLOv5 Inference

    Chapter 17: Deep Dive into Generative AI and Large Language Models PART 1

    Lecture 1: Generative AI & LLM

    Lecture 2: LLM use cases and Tasks

    Lecture 3: Text generation before transformers

    Lecture 4: Transformer Architecture Part 1

    Lecture 5: Transformer Architecture Part 2

    Lecture 6: Transformer-Based Translation Task

    Lecture 7: Transformer Encoder-Decoder

    Lecture 8: Prompt & Prompt Engineering
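Lectures 4 and 5 above cover the transformer architecture, whose core operation is scaled dot-product attention. A minimal pure-Python sketch over lists of lists; real implementations batch this across heads with PyTorch tensors.

```python
# Minimal scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V.
# Pure-Python, single head, row-major matrices.

import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(Q, K, V):
    """One attention pass: each query attends over all key/value rows."""
    d_k = len(K[0])
    out = []
    for q in Q:
        # similarity of this query to every key, scaled by sqrt(d_k)
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d_k)
                  for k in K]
        weights = softmax(scores)
        # output row = convex combination of the value rows
        out.append([sum(w * v[j] for w, v in zip(weights, V))
                    for j in range(len(V[0]))])
    return out
```

Because the weights for each query sum to 1, every output row is a weighted average of the value rows, which is why attention is often described as soft lookup.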

    Instructors

  • Fikrat Gasimov, PhD Researcher, AI & Robotics Scientist
    Senior PhD AI & Robotics Scientist & Embedded Software Engineer
Rating Distribution

  • 1 stars: 1 votes
  • 2 stars: 0 votes
  • 3 stars: 2 votes
  • 4 stars: 2 votes
  • 5 stars: 8 votes
Frequently Asked Questions

    How long do I have access to the course materials?

    You can view and review the lecture materials indefinitely, like an on-demand channel.

    Can I take my courses with me wherever I go?

    Definitely! If you have an internet connection, courses on Udemy are available on any device at any time. If you don’t have an internet connection, some instructors also let their students download course lectures. That’s up to the instructor though, so make sure you get on their good side!