Generative AI Tech Stack
- Tool Execution / API Integration
- MCP (Model Context Protocol)
- OpenFunction / Serverless
- Function Calling (OpenAI)
- Tool Calling (Anthropic Claude 3)
- Functionary
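The common thread in the tools above is a dispatch loop: the model is shown a JSON description of each tool and replies with a tool name plus JSON-encoded arguments, which the runtime executes. A minimal sketch of that pattern (the schema, tool, and arguments here are made up for illustration, not a real API call):

```python
import json

# Hypothetical tool schema in the OpenAI function-calling style: the
# model sees this description and replies with a name + JSON arguments.
GET_WEATHER_SCHEMA = {
    "name": "get_weather",
    "description": "Return the current weather for a city.",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}

def get_weather(city: str) -> str:
    # Stub implementation; a real tool would call a weather API.
    return f"Sunny in {city}"

TOOLS = {"get_weather": get_weather}

def dispatch(tool_call: dict) -> str:
    """Execute the tool the model asked for and return its result."""
    fn = TOOLS[tool_call["name"]]
    args = json.loads(tool_call["arguments"])
    return fn(**args)

# Simulated model response asking for a tool call:
result = dispatch({"name": "get_weather", "arguments": '{"city": "Oslo"}'})
print(result)  # Sunny in Oslo
```

MCP standardizes this same exchange across vendors; the vendor SDKs differ mainly in how the schema and the tool-call message are spelled.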
- Foundation Models (LLMs)
- OpenAI (GPT-4, GPT-3.5)
- Mistral
- Anthropic Claude
- DeepSeek
- Google Gemini
- Cohere (Command R+)
- Meta Llama
- MosaicML
- Groq
- Aleph Alpha
- xAI (Grok)
- Prompt Engineering & Tuning
- LangChain Prompts
- Promptable
- DSPy
- PromptableUI
- PromptLayer
- Flowise Prompts
- PromptFlow
- Instructor
- Guidance
- ReLLM
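At their core, the prompt tools above formalize templating: a prompt with named slots, filled at call time. A stdlib-only sketch of the few-shot templating pattern (the sentiment task and examples are invented for illustration):

```python
from string import Template

# Few-shot prompt template with a named slot, the basic pattern that
# LangChain Prompts, PromptLayer, etc. build versioning and tracking on.
FEW_SHOT = Template(
    "Classify the sentiment of the review as positive or negative.\n"
    "Review: I loved it. Sentiment: positive\n"
    "Review: Terrible experience. Sentiment: negative\n"
    "Review: $review Sentiment:"
)

def build_prompt(review: str) -> str:
    return FEW_SHOT.substitute(review=review)

prompt = build_prompt("Great value for the money.")
print(prompt)
```

DSPy and Guidance go further by treating the template as a program to be optimized or constrained, but the slot-filling core is the same.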
- Retrieval & RAG
- LlamaIndex
- OpenChatKit
- Haystack
- RAGatouille
- LangChain Retriever
- Zep (memory storage for RAG)
- Vespa
- Unstructured.io
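All of these retrieval tools implement the same two-step core: find the documents most similar to the query, then stuff them into the prompt as context. A toy sketch using bag-of-words cosine similarity in place of a real embedding model and vector store (documents and scoring are illustrative only):

```python
import math
from collections import Counter

# Toy corpus; a real pipeline would chunk and embed documents with
# tools like Unstructured.io and an embedding model.
DOCS = [
    "The Eiffel Tower is in Paris.",
    "Python was created by Guido van Rossum.",
    "The Great Wall is in China.",
]

def vectorize(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str) -> str:
    """Return the document most similar to the query."""
    q = vectorize(query)
    return max(DOCS, key=lambda d: cosine(q, vectorize(d)))

def build_rag_prompt(query: str) -> str:
    return f"Context: {retrieve(query)}\nQuestion: {query}\nAnswer:"

print(build_rag_prompt("Who created Python?"))
```

LlamaIndex and Haystack add chunking, reranking, and index management on top of this loop; Zep adds a memory layer so past conversation turns are retrievable too.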
- Embedding Models
- OpenAI Embeddings
- Alibaba Tongyi Embeddings
- Cohere Embed
- Voyage AI Embeddings
- Hugging Face (E5, BGE, Instructor)
- MiniLM (Microsoft)
- Google Universal Sentence Encoder
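Whatever the provider, an embedding model maps text to a dense vector, and relatedness is usually scored with cosine similarity. A sketch with made-up three-dimensional vectors (real models return hundreds or thousands of dimensions):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

# Hypothetical embeddings, invented for illustration only.
cat = [0.9, 0.1, 0.3]
kitten = [0.85, 0.15, 0.35]
car = [0.1, 0.9, 0.2]

# Semantically close texts should land closer in embedding space.
assert cosine_similarity(cat, kitten) > cosine_similarity(cat, car)
```

The practical differences between providers are dimensionality, cost, multilingual coverage, and whether query and document texts use distinct instructions (as E5 and Instructor-style models do).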
- Vector Stores
- ChromaDB
- FAISS
- Pinecone
- Milvus
- Weaviate
- Redis with Vector Search
- Qdrant
- Typesense Vector
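All of these stores expose roughly the same interface: add vectors under an id, then query for the nearest neighbors. A minimal in-memory sketch of that interface (brute-force distance; the real systems add approximate-nearest-neighbor indexes, persistence, and metadata filtering):

```python
import math

class TinyVectorStore:
    """Brute-force stand-in for the add/query interface of a vector DB."""

    def __init__(self):
        self.items = []  # list of (id, vector) pairs

    def add(self, doc_id: str, vector: list[float]) -> None:
        self.items.append((doc_id, vector))

    def query(self, vector: list[float], k: int = 1) -> list[str]:
        # Rank stored vectors by Euclidean distance to the query.
        ranked = sorted(self.items, key=lambda item: math.dist(vector, item[1]))
        return [doc_id for doc_id, _ in ranked[:k]]

store = TinyVectorStore()
store.add("doc-a", [0.0, 1.0])
store.add("doc-b", [1.0, 0.0])
print(store.query([0.9, 0.1], k=1))  # ['doc-b']
```

The choice between them mostly comes down to deployment model (embedded like FAISS/ChromaDB vs. managed like Pinecone), filtering needs, and scale.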
- Agents & Tool Use
- LangChain Agents
- AutoGen Studio
- AutoGen
- Superagent
- CrewAI
- MetaGPT
- E2B (for sandbox environments)
- Semantic Kernel
- OpenAgents
- AgentVerse
- CrewAI Cloud
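Underneath each of these frameworks is the same loop: the model picks an action, the runtime executes it, and the observation is fed back until the model decides to finish. A sketch with a stub policy standing in for the LLM (the calculator tool and the hard-coded decisions are illustrative, not a real agent):

```python
def calculator(expr: str) -> str:
    return str(eval(expr))  # demo only; never eval untrusted input

TOOLS = {"calculator": calculator}

def stub_policy(question: str, observations: list[str]) -> dict:
    # A real agent would prompt an LLM here to choose the next action.
    if not observations:
        return {"action": "calculator", "input": "2 + 3"}
    return {"action": "finish", "input": observations[-1]}

def run_agent(question: str, max_steps: int = 5) -> str:
    """Act-observe loop: execute tools until the policy says finish."""
    observations: list[str] = []
    for _ in range(max_steps):
        step = stub_policy(question, observations)
        if step["action"] == "finish":
            return step["input"]
        observations.append(TOOLS[step["action"]](step["input"]))
    return "max steps reached"

print(run_agent("What is 2 + 3?"))  # 5
```

Multi-agent frameworks like AutoGen and CrewAI run several of these loops and route messages between them; E2B supplies sandboxed environments so tool execution is isolated.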
- Output Validation & Guardrails
- Guardrails AI
- Nemo Guardrails
- ReLLM
- OutputParser (LangChain)
- Pydantic
- TypeChat
- Cerbos
- Trulens
- ConvoGuard
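The core move these tools share is validating model output against a schema before anything downstream consumes it, and failing loudly (or re-prompting) when it doesn't conform. A stdlib-only sketch of that check (the person schema is invented; Pydantic or Guardrails would express it declaratively):

```python
import json

# Expected shape of the model's JSON output, illustrative only.
REQUIRED_FIELDS = {"name": str, "age": int}

def parse_person(raw: str) -> dict:
    """Parse model output and reject anything that violates the schema."""
    data = json.loads(raw)  # raises on malformed JSON
    for field, typ in REQUIRED_FIELDS.items():
        if not isinstance(data.get(field), typ):
            raise ValueError(f"field {field!r} missing or not {typ.__name__}")
    return data

person = parse_person('{"name": "Ada", "age": 36}')
print(person)
```

In a real pipeline the `ValueError` would typically trigger a retry with the error message appended to the prompt, which is the repair loop Guardrails AI and Instructor automate.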
- UI/Frontend/Deployment
- Streamlit
- Flowise
- Gradio
- FastAPI
- Vercel
- Next.js
- React
- SvelteKit
- Flask
- Hugging Face Spaces
- Memory Management
- LangChain Memory
- AutoGen Memory
- Semantic Kernel Memory Modules
- Zep
- Vector Memory + ChromaDB
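The simplest of these is buffer memory: keep the last N conversation turns and replay them into the next prompt. A sketch of that pattern (Zep and vector memory extend it with persistence, summarization, and semantic recall of older turns):

```python
from collections import deque

class BufferMemory:
    """Keep the most recent turns; older ones are evicted automatically."""

    def __init__(self, max_turns: int = 4):
        self.turns = deque(maxlen=max_turns)

    def add(self, role: str, text: str) -> None:
        self.turns.append((role, text))

    def as_prompt(self) -> str:
        return "\n".join(f"{role}: {text}" for role, text in self.turns)

memory = BufferMemory(max_turns=2)
memory.add("user", "Hi, I'm Sam.")
memory.add("assistant", "Hello Sam!")
memory.add("user", "What's my name?")  # first turn falls out of the window
print(memory.as_prompt())
```

The trade-off is visible in the example: with a small window the model literally cannot answer "What's my name?", which is why production stacks pair a buffer with summarized or vector-indexed long-term memory.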
- Workflow Orchestration & Integration
- LangChain
- Reactor (LangChain/AutoGen)
- LLMStack
- Make.com (for no-code workflows)
- Airflow (for scheduled LLM tasks)
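Stripped of retries, branching, and scheduling, an orchestrated workflow is an ordered list of steps, each consuming the previous step's output. A sketch of that chaining core (the cleanup steps are invented for illustration):

```python
def run_pipeline(steps, value):
    """Run each step on the previous step's output, in order."""
    for step in steps:
        value = step(value)
    return value

pipeline = [
    str.strip,                     # clean raw input
    str.lower,                     # normalize
    lambda s: f"Summarize: {s}",   # build the LLM prompt
]
print(run_pipeline(pipeline, "  Quarterly Report  "))  # Summarize: quarterly report
```

LangChain's chains generalize the step interface, while Airflow and Make.com add scheduling, observability, and cross-service triggers around the same idea.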
- Testing & Evaluation
- Helicone
- Trulens
- Promptfoo
- Weights & Biases (W & B)
- LangSmith
- Evals (OpenAI)
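At bottom, every eval tool runs a model over labeled cases and scores the results. A sketch of the exact-match loop that Promptfoo and OpenAI Evals wrap with datasets, graders, and reporting (the model here is a stub lookup table, not a real LLM):

```python
def model(question: str) -> str:
    # Stub standing in for an LLM call, illustrative only.
    return {"2+2": "4", "capital of France": "Paris"}.get(question, "unknown")

CASES = [
    ("2+2", "4"),
    ("capital of France", "Paris"),
    ("capital of Spain", "Madrid"),
]

def evaluate(model_fn, cases):
    """Fraction of cases where the model's answer matches exactly."""
    hits = sum(model_fn(q) == expected for q, expected in cases)
    return hits / len(cases)

score = evaluate(model, CASES)
print(f"accuracy: {score:.2f}")  # accuracy: 0.67
```

Exact match is the bluntest grader; LangSmith, Trulens, and W&B add LLM-as-judge scoring, tracing, and run comparison on top of this loop.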