This is a Zhipu (Z.ai) provider for the Vercel AI SDK. It enables seamless integration with the language (GLM), embedding, and image models provided on bigmodel.cn or z.ai by ZhipuAI.
```bash
# npm
npm i zhipu-ai-provider

# pnpm
pnpm add zhipu-ai-provider

# yarn
yarn add zhipu-ai-provider

# bun
bun add zhipu-ai-provider
```

Set up your `.env` file / environment with your API key:
```bash
ZHIPU_API_KEY=<your-api-key>
```

You can import the default provider instance `zhipu` from `zhipu-ai-provider` (this automatically reads the API key from the environment variable `ZHIPU_API_KEY`):
```ts
import { zhipu } from 'zhipu-ai-provider'; // for bigmodel.cn
// or
import { zai } from 'zhipu-ai-provider'; // for z.ai
```

Alternatively, you can create a provider instance with custom configuration using `createZhipu`:
```ts
import { createZhipu } from 'zhipu-ai-provider';

const zhipu = createZhipu({
  baseURL: 'https://open.bigmodel.cn/api/paas/v4',
  apiKey: 'your-api-key',
});
```

You can use the following optional settings to customize the Zhipu provider instance:
- `baseURL` *string*

  Use a different URL prefix for API calls, e.g. to use proxy servers. The default prefix is `https://open.bigmodel.cn/api/paas/v4`.

- `apiKey` *string*

  Your API key for the Zhipu BigModel platform. If not provided, the provider will attempt to read it from the environment variable `ZHIPU_API_KEY`.

- `headers` *Record<string, string>*

  Custom headers to include in the requests.
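As an illustration, a provider instance that routes requests through a proxy and attaches a custom header might look like the sketch below (the proxy URL and header name are hypothetical placeholders, not part of the package):

```typescript
import { createZhipu } from 'zhipu-ai-provider';

// Hypothetical proxy endpoint and tracing header, shown only to
// illustrate the baseURL, apiKey, and headers options together.
const zhipu = createZhipu({
  baseURL: 'https://my-proxy.example.com/api/paas/v4',
  apiKey: process.env.ZHIPU_API_KEY,
  headers: {
    'X-Request-Source': 'my-app',
  },
});
```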
Generate text with `generateText`:

```ts
import { generateText } from 'ai';
import { zhipu } from 'zhipu-ai-provider';

const { text } = await generateText({
  model: zhipu('glm-4-plus'),
  prompt: 'Why is the sky blue?',
});

console.log(text);
```

To disable thinking for hybrid models like glm-4.7, set the `think` option to `disable`, either in the model options or in the `providerOptions`:
```ts
const { text } = await generateText({
  model: zhipu('glm-4.7', {
    think: {
      type: 'disable', // disable thinking
    },
  }),
  prompt: 'Explain quantum computing in simple terms.',
});
```

or:
```ts
const { text } = await generateText({
  model: zhipu('glm-4.7'),
  prompt: 'Explain quantum computing in simple terms.',
  providerOptions: {
    zhipu: {
      think: {
        type: 'disable',
      },
    },
  },
});
```

Generate embeddings with `embed`:

```ts
import { embed } from 'ai';
import { zhipu } from 'zhipu-ai-provider';

const { embedding } = await embed({
  model: zhipu.textEmbeddingModel('embedding-3', {
    dimensions: 256, // optional, defaults to 2048
  }),
  value: 'Hello, world!',
});

console.log(embedding);
```

Zhipu supports image generation with glm-image or cogview models, but the API does not return images in base64 or buffer format, so the image URLs are returned in the `providerMetadata` field instead:
```ts
import { experimental_generateImage as generateImage } from 'ai';
import { zhipu } from 'zhipu-ai-provider';

const { image, providerMetadata } = await generateImage({
  model: zhipu.imageModel('cogview-4-250304'),
  prompt: 'A beautiful landscape with mountains and a river',
  size: '1024x1024', // optional
  providerOptions: {
    // optional
    zhipu: {
      quality: 'hd',
    },
  },
});

console.log(providerMetadata.zhipu.images[0].url);
```

Features:

- Text generation
- Text embedding
- Image generation
- Chat
- Tools
- Streaming
- Structured output
- Reasoning
- Vision
- Vision Reasoning
- Provider-defined tools
- Video Models
- Audio Models
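Streaming (listed above) follows the standard AI SDK pattern; the sketch below assumes the same `glm-4-plus` model used earlier and requires a valid `ZHIPU_API_KEY`:

```typescript
import { streamText } from 'ai';
import { zhipu } from 'zhipu-ai-provider';

// Stream tokens as they arrive instead of waiting for the full completion.
const { textStream } = streamText({
  model: zhipu('glm-4-plus'),
  prompt: 'Write a haiku about mountains.',
});

for await (const chunk of textStream) {
  process.stdout.write(chunk);
}
```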