LLMs and Generative AI

To Do

  • LLM Configuration
  • LLM In-context learning (ICL)
  • LLM Quantization
  • Three LLMs comparison
  • Instruction fine-tuning (IFT): Single-task
  • Instruction fine-tuning (IFT): Multi-task
  • Parameter-efficient fine-tuning (PEFT)
  • Reinforcement learning with human feedback (RLHF)
  • LLM Powered Applications

Generative AI Project Lifecycle

A Generative AI project can be broadly divided into three major stages:

1. Scope & Model Selection

  • Clearly define the business problem or use case before starting development.
  • Identify success metrics and constraints (cost, latency, accuracy, compliance).
  • Model selection criteria:
    1. Pretraining alignment: how well the model’s pretraining data and objectives match your domain and task.
    2. Model size: trade-offs between performance, inference latency, and computational resources.
    3. Context window: the maximum input length the model supports, which affects document handling, reasoning depth, and memory.
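The criteria above can be combined into a simple weighted comparison. This is a hypothetical sketch: the weights, ratings, and model names below are illustrative assumptions, not real benchmark data.

```python
# Hypothetical sketch: scoring candidate models against the three
# selection criteria (pretraining alignment, size fit, context fit).
# Weights and ratings are illustrative assumptions.

def score_model(alignment, size_fit, context_fit, weights=(0.5, 0.25, 0.25)):
    """Weighted score in [0, 1] from three criterion ratings in [0, 1]."""
    a, s, c = weights
    return a * alignment + s * size_fit + c * context_fit

candidates = {
    "small-general": score_model(alignment=0.4, size_fit=0.9, context_fit=0.6),
    "large-domain":  score_model(alignment=0.9, size_fit=0.5, context_fit=0.8),
}
best = max(candidates, key=candidates.get)  # → "large-domain"
```

In practice the ratings would come from domain evaluations and latency/cost measurements rather than hand-picked numbers, but the structure of the decision stays the same.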

2. Adaptation & Alignment

  • Prompt engineering
  • Fine-tuning or parameter-efficient tuning (e.g., LoRA)
  • Retrieval-Augmented Generation (RAG)
  • Safety, bias, and alignment adjustments
  • The goal is to improve task performance while ensuring reliability and responsible behavior.
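The low-rank idea behind LoRA, mentioned above, can be sketched in a few lines of NumPy. This shows the math only, not the `peft` library API; dimensions and initialization follow the standard LoRA formulation (B initialized to zero so fine-tuning starts from the pretrained weights).

```python
import numpy as np

# Minimal sketch of the low-rank update behind LoRA: the frozen weight W
# is augmented with a trainable product B @ A of rank r << min(d, k).
rng = np.random.default_rng(0)

d, k, r = 8, 8, 2                     # layer dims and low rank
W = rng.normal(size=(d, k))           # frozen pretrained weight
A = rng.normal(size=(r, k)) * 0.01    # trainable down-projection
B = np.zeros((d, r))                  # trainable up-projection, init to 0

# Effective weight during fine-tuning; only A and B receive gradients.
W_eff = W + B @ A

# Trainable parameters shrink from d*k to r*(d + k).
full_params, lora_params = d * k, r * (d + k)
```

Because B starts at zero, `W_eff` equals `W` before any training step, so adaptation begins exactly from the pretrained behavior.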

3. Application Integration

  • API or service integration
  • Frontend and user experience design
  • Monitoring, logging, and evaluation
  • Cost optimization and scalability
  • Deployment and maintenance
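The monitoring and logging step can be sketched as a decorator around the model call. This is a minimal hypothetical example: `fake_llm_call` stands in for a real API client, and the token estimate is a crude whitespace split, not a real tokenizer.

```python
import time
import functools

def monitored(fn):
    """Wrap an LLM call with latency and rough usage logging (sketch)."""
    @functools.wraps(fn)
    def wrapper(prompt):
        start = time.perf_counter()
        reply = fn(prompt)
        latency_ms = (time.perf_counter() - start) * 1000
        # In production this would go to a metrics backend, not stdout.
        print(f"latency_ms={latency_ms:.1f} prompt_tokens~={len(prompt.split())}")
        return reply
    return wrapper

@monitored
def fake_llm_call(prompt):
    # Placeholder for a real model/API call.
    return f"echo: {prompt}"

result = fake_llm_call("Summarize the project lifecycle")
```

The same pattern extends naturally to cost tracking (token counts times price per token) and to sampling responses for offline evaluation.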

About

Practicing and understanding LLMs and Generative AI
