llm_dart 0.4.0
Install by adding `llm_dart: ^0.4.0` to your `pubspec.yaml`.
A modular Dart library for AI provider interactions with unified interface for OpenAI, Anthropic, Google, DeepSeek, Ollama, xAI, Groq, ElevenLabs and more.
example/README.md
# LLM Dart Examples - Redesigned
A comprehensive collection of examples for the LLM Dart library, organized by user need and learning path so you can quickly find the functionality you are looking for.
## 🚀 Quick Navigation
### Choose by Skill Level

| Skill Level | Recommended Path |
|---|---|
| 🟢 Beginner | Getting Started → Core Features |
| 🟡 Intermediate | Core Features → Advanced Features |
| 🔴 Advanced | Advanced Features → Provider Specific |
### Choose by Use Case

| Use Case | Direct Link |
|---|---|
| Chatbot | 05_use_cases/chatbot.dart |
| CLI Tool | 05_use_cases/cli_tool.dart |
| Web Service | 05_use_cases/web_service.dart |
| MCP Integration | 06_mcp_integration/mcp_concept_demo.dart |
| Real-World App | Yumcha - Production Flutter app |
## 📁 Directory Structure
### 🟢 Getting Started
For: First-time users of LLM Dart
- quick_start.dart - Quick start guide
- provider_comparison.dart - Provider comparison and selection
- basic_configuration.dart - Basic configuration guide
### 🟡 Core Features
For: Users who need to understand main functionality
- chat_basics.dart - Basic chat functionality
- streaming_chat.dart - Real-time streaming chat
- tool_calling.dart - Tool calling and function execution
- enhanced_tool_calling.dart - Advanced tool calling patterns
- structured_output.dart - Structured data output
- error_handling.dart - Error handling best practices
### 🔴 Advanced Features
For: Users who need deep customization
- reasoning_models.dart - 🧠 Reasoning models and thinking processes
- multi_modal.dart - Multi-modal processing (images/audio)
- custom_providers.dart - Custom provider development
- performance_optimization.dart - Performance optimization techniques
### 🎯 Provider Specific
For: Users who need specific provider functionality
| Provider | Key Features | Example Files |
|---|---|---|
| OpenAI | GPT models, image generation, assistants | openai/ |
| Anthropic | Claude, extended thinking | anthropic/ |
| Google | Gemini, multi-modal | google/ |
| DeepSeek | Reasoning models, cost-effective | deepseek/ |
| Groq | Ultra-fast inference | groq/ |
| Ollama | Local models, privacy-focused | ollama/ |
| ElevenLabs | Voice synthesis/recognition | elevenlabs/ |
| Others | xAI Grok and more | others/ |
### 💪 Real-world Use Cases
For: Users looking for specific application solutions
- chatbot.dart - Complete chatbot implementation
- cli_tool.dart - Command-line AI assistant
- web_service.dart - HTTP API with AI capabilities
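Once the web service example is running, you can exercise it from the command line. This sketch is illustrative only: the port, route, and payload shape are assumptions, so check web_service.dart for the actual endpoint it exposes.

```shell
# Hypothetical request against a locally running web_service.dart;
# adjust the port and route to match the example's actual server setup.
payload='{"message": "Hello, AI!"}'
curl -s -X POST http://localhost:8080/chat \
  -H 'Content-Type: application/json' \
  -d "$payload"
```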
### 🚀 Production Application
Real-world example built with LLM Dart
- Yumcha - Cross-platform AI chat application actively developed by the creator of LLM Dart, showcasing real-world integration with multiple providers, real-time streaming, and advanced features
### 🔌 MCP Integration ✅ FULLY TESTED
For: Users who want to connect LLMs with external tools via Model Context Protocol
- mcp_concept_demo.dart - 🎯 START HERE - Core MCP concepts
- simple_mcp_demo.dart - Working MCP + LLM integration example
- test_all_examples.dart - 🧪 ONE-CLICK TEST - Test all examples
- basic_mcp_client.dart - Basic MCP client connection
- custom_mcp_server.dart - Custom MCP server implementation
- mcp_tool_bridge.dart - Bridge between MCP and llm_dart tools
- mcp_with_llm.dart - Advanced MCP + LLM integration
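Under the hood, MCP clients and servers exchange JSON-RPC 2.0 messages over a transport such as stdio. As a minimal sketch of what the examples above are wiring up, this is roughly the request a client sends to discover a server's tools (the `tools/list` method comes from the MCP specification; the transport framing is handled for you by the example code):

```shell
# Illustrative JSON-RPC 2.0 message an MCP client sends to list a
# server's available tools; the server replies with a tool catalog
# that can then be exposed to the LLM for tool calling.
request='{"jsonrpc": "2.0", "id": 1, "method": "tools/list", "params": {}}'
echo "$request"
```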
## 🎯 Feature Support Matrix

| Feature | OpenAI | Anthropic | Google | DeepSeek | Ollama | Groq | ElevenLabs |
|---|---|---|---|---|---|---|---|
| 💬 Basic Chat | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ |
| 🌊 Streaming | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ |
| 🔧 Tool Calling | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ |
| 🧠 Thinking Process | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ |
| 🖼️ Image Processing | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ❌ |
| 🎵 Audio Processing | ✅ | ❌ | ✅ | ❌ | ❌ | ✅ | ✅ |
| 📋 Structured Output | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ |
## 🚀 Quick Start
### 1. Choose Your First Example
```bash
# Complete beginner - quick experience
dart run 01_getting_started/quick_start.dart

# Experienced - jump to core features
dart run 02_core_features/chat_basics.dart

# Specific needs - jump to corresponding scenario
dart run 05_use_cases/chatbot.dart

# MCP integration - connect LLMs with external tools
dart run 06_mcp_integration/mcp_concept_demo.dart
```
### 2. Set Environment Variables
```bash
# Set API keys for the providers you want to use
export OPENAI_API_KEY="your-openai-key"
export ANTHROPIC_API_KEY="your-anthropic-key"
export GOOGLE_API_KEY="your-google-key"
export DEEPSEEK_API_KEY="your-deepseek-key"
export GROQ_API_KEY="your-groq-key"
export ELEVENLABS_API_KEY="your-elevenlabs-key"
```
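Before running an example, it can help to confirm which of these keys are actually present in the current shell. A small bash sketch (the key names are the ones listed above; `${!key}` is bash indirect expansion):

```shell
# Report which provider API keys are set in the environment.
for key in OPENAI_API_KEY ANTHROPIC_API_KEY GOOGLE_API_KEY \
           DEEPSEEK_API_KEY GROQ_API_KEY ELEVENLABS_API_KEY; do
  if [ -n "${!key}" ]; then
    echo "$key is set"
  else
    echo "$key is missing"
  fi
done
```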
### 3. Run Examples
```bash
cd new_example
dart run 01_getting_started/quick_start.dart
```
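To smoke-test a whole directory of examples in one go, a simple loop works (assumes `dart` is on your PATH and you are in the examples root; the directory name follows the layout above):

```shell
# Run every Dart example in one directory, continuing past failures so
# a single missing API key does not stop the sweep.
for f in 01_getting_started/*.dart; do
  echo "== $f =="
  dart run "$f" || echo "FAILED: $f"
done
```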
## 💡 Learning Recommendations
### 🟢 Beginner Users

1. Start with quick_start.dart
2. Read provider_comparison.dart to choose the right provider
3. Learn chat_basics.dart to master basic conversations
4. Try streaming_chat.dart to experience real-time responses
### 🟡 Intermediate Users

1. Master tool_calling.dart for tool calling
2. Learn structured_output.dart for structured output
3. Explore reasoning_models.dart for reasoning functionality
4. Choose specific use case examples based on your needs
### 🔴 Advanced Users

1. Study custom_providers.dart for custom development
2. Optimize performance with performance_optimization.dart
3. Deep dive into specific providers' advanced features
4. Explore Yumcha for real-world architecture patterns and active development practices
5. Integrate into production environments with MCP protocol support
## 🔗 Related Links
- Main Project README - Complete library documentation
- API Documentation - Detailed API reference
- GitHub Issues - Bug reports and feature requests
- Discussions - Community discussions
💡 **Tip**: If you can't find the example you need, check GitHub Issues or open a new issue to tell us what you need!