ChatFirebaseVertexAI class
Wrapper around the Vertex AI for Firebase API (a.k.a. the Gemini API).
Example:
final chatModel = ChatFirebaseVertexAI();
final messages = [
  ChatMessage.humanText('Tell me a joke.'),
];
final prompt = PromptValue.chat(messages);
final res = await chatModel.invoke(prompt);
Setup
To use ChatFirebaseVertexAI you need to have:
- A Firebase project with the Blaze pay-as-you-go pricing plan
- The aiplatform.googleapis.com and firebaseml.googleapis.com APIs enabled
- The Firebase SDK initialized in your app
- Recommended: Firebase App Check enabled
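Initializing the Firebase SDK before constructing the model typically looks like the following sketch. It assumes a Flutter app and a `firebase_options.dart` file generated by `flutterfire configure`; adapt it to your own setup:

```dart
import 'package:firebase_core/firebase_core.dart';
import 'package:flutter/widgets.dart';

// Generated by `flutterfire configure` (an assumption of this sketch).
import 'firebase_options.dart';

Future<void> main() async {
  // Ensure platform bindings are ready before calling native Firebase code.
  WidgetsFlutterBinding.ensureInitialized();
  await Firebase.initializeApp(
    options: DefaultFirebaseOptions.currentPlatform,
  );
  // Firebase is now initialized; ChatFirebaseVertexAI can be constructed.
}
```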
Available models
The following models are available:
- gemini-1.5-flash:
  - text / image / audio -> text model
  - Max input tokens: 1048576
  - Max output tokens: 8192
- gemini-1.5-pro:
  - text / image / audio -> text model
  - Max input tokens: 2097152
  - Max output tokens: 8192
- gemini-1.0-pro-vision:
  - text / image -> text model
  - Max input tokens: 16384
  - Max output tokens: 2048
- gemini-1.0-pro:
  - text -> text model
  - Max input tokens: 32760
  - Max output tokens: 8192
Mind that this list may not be up-to-date. Refer to the documentation for the updated list.
Call options
You can configure the parameters that will be used when calling the chat completions API in several ways:
Default options:
Use the defaultOptions parameter to set the default options. These options will be used unless you override them when generating completions.
final chatModel = ChatFirebaseVertexAI(
  defaultOptions: ChatFirebaseVertexAIOptions(
    model: 'gemini-1.5-pro-preview',
    temperature: 0,
  ),
);
Call options:
You can override the default options when invoking the model:
final res = await chatModel.invoke(
  prompt,
  options: const ChatFirebaseVertexAIOptions(temperature: 1),
);
Bind:
You can also change the options in a Runnable pipeline using the bind method. In this example, we use a different model for each question:
final chatModel = ChatFirebaseVertexAI();
const outputParser = StringOutputParser();
final prompt1 = PromptTemplate.fromTemplate('How are you {name}?');
final prompt2 = PromptTemplate.fromTemplate('How old are you {name}?');
final chain = Runnable.fromMap({
  'q1': prompt1 | chatModel.bind(const ChatFirebaseVertexAIOptions(model: 'gemini-1.0-pro')) | outputParser,
  'q2': prompt2 | chatModel.bind(const ChatFirebaseVertexAIOptions(model: 'gemini-1.0-pro-vision')) | outputParser,
});
final res = await chain.invoke({'name': 'David'});
Tool calling
ChatFirebaseVertexAI supports tool calling.
Check the docs for more information on how to use tools.
Example:
const tool = ToolSpec(
  name: 'get_current_weather',
  description: 'Get the current weather in a given location',
  inputJsonSchema: {
    'type': 'object',
    'properties': {
      'location': {
        'type': 'string',
        'description': 'The city and state, e.g. San Francisco, CA',
      },
    },
    'required': ['location'],
  },
);
final chatModel = ChatFirebaseVertexAI(
  defaultOptions: ChatFirebaseVertexAIOptions(
    model: 'gemini-1.5-pro',
    temperature: 0,
    tools: [tool],
  ),
);
final res = await chatModel.invoke(
  PromptValue.string('What’s the weather like in Boston and Madrid right now in celsius?'),
);
Constructors
- ChatFirebaseVertexAI.new({ChatFirebaseVertexAIOptions defaultOptions = const ChatFirebaseVertexAIOptions(model: defaultModel), FirebaseApp? app, FirebaseAppCheck? appCheck, FirebaseAuth? auth, String? location})
  Create a new ChatFirebaseVertexAI instance.
Properties
- app → FirebaseApp?
  The FirebaseApp to use. If not provided, the default app will be used. (final)
- appCheck → FirebaseAppCheck?
  The optional FirebaseAppCheck to use to protect the project from abuse. (final)
- auth → FirebaseAuth?
  The optional FirebaseAuth to use for authentication. (final)
- defaultOptions → ChatFirebaseVertexAIOptions
  The default options to use when invoking the Runnable. (final, inherited)
- hashCode → int
  The hash code for this object. (no setter, inherited)
- location → String?
  The service location for the FirebaseVertexAI instance. (final)
- modelType → String
  Return type of language model. (no setter)
- runtimeType → Type
  A representation of the runtime type of the object. (no setter, inherited)
Methods
- batch(List<PromptValue> inputs, {List<ChatFirebaseVertexAIOptions>? options}) → Future<List<ChatResult>>
  Batches the invocation of the Runnable on the given inputs. (inherited)
- bind(ChatFirebaseVertexAIOptions options) → RunnableBinding<PromptValue, ChatFirebaseVertexAIOptions, ChatResult>
  Binds the Runnable to the given options. (inherited)
- call(List<ChatMessage> messages, {ChatFirebaseVertexAIOptions? options}) → Future<AIChatMessage>
  Runs the chat model on the given messages and returns a chat message. (inherited)
- close() → void
  Cleans up any resources associated with the Runnable. (inherited)
- countTokens(PromptValue promptValue, {ChatFirebaseVertexAIOptions? options}) → Future<int>
  Returns the number of tokens resulting from tokenizing the given prompt.
- getCompatibleOptions(RunnableOptions? options) → ChatFirebaseVertexAIOptions?
  Returns the given options if they are compatible with the Runnable, otherwise returns null. (inherited)
- invoke(PromptValue input, {ChatFirebaseVertexAIOptions? options}) → Future<ChatResult>
  Invokes the Runnable on the given input.
- noSuchMethod(Invocation invocation) → dynamic
  Invoked when a nonexistent method or property is accessed. (inherited)
- pipe<NewRunOutput extends Object?, NewCallOptions extends RunnableOptions>(Runnable<ChatResult, NewCallOptions, NewRunOutput> next) → RunnableSequence<PromptValue, NewRunOutput>
  Pipes the output of this Runnable into another Runnable using a RunnableSequence. (inherited)
- stream(PromptValue input, {ChatFirebaseVertexAIOptions? options}) → Stream<ChatResult>
  Streams the output of invoking the Runnable on the given input.
- streamFromInputStream(Stream<PromptValue> inputStream, {ChatFirebaseVertexAIOptions? options}) → Stream<ChatResult>
  Streams the output of invoking the Runnable on the given inputStream. (inherited)
- tokenize(PromptValue promptValue, {ChatFirebaseVertexAIOptions? options}) → Future<List<int>>
  Tokenizes the given prompt using the encoding used by the language model.
- toString() → String
  A string representation of this object. (inherited)
- withFallbacks(List<Runnable<PromptValue, RunnableOptions, ChatResult>> fallbacks) → RunnableWithFallback<PromptValue, ChatResult>
  Adds fallback runnables to be invoked if the primary runnable fails. (inherited)
- withRetry({int maxRetries = 3, FutureOr<bool> retryIf(Object e)?, List<Duration?>? delayDurations, bool addJitter = false}) → RunnableRetry<PromptValue, ChatResult>
  Adds retry logic to an existing runnable. (inherited)
Operators
- operator ==(Object other) → bool
  The equality operator. (inherited)
Constants
- defaultModel → const String
  The default model to use unless another one is specified.