Insert objects into images using Imagen


This page describes how to use inpainting to insert objects into an image with Imagen, using the Firebase AI Logic SDKs.

Inpainting is a type of mask-based editing. A mask is a digital overlay that defines the specific area you want to edit.

How it works: You provide the original image along with a corresponding mask image (either auto-generated or provided by you) that defines a mask over the area where you want to add new content. You also provide a text prompt describing what you want to add. The model then generates and adds the new content within the masked area.

For example, you can mask a table and prompt the model to add a vase of flowers.

Jump to code for an auto-generated mask  Jump to code for a provided mask

Before you begin

This feature is available only when using the Vertex AI Gemini API as your API provider.

If you haven't already, complete the getting started guide, which describes how to set up your Firebase project, connect your app to Firebase, add the SDK, initialize the backend service for your chosen API provider, and create an ImagenModel instance.

Models that support this capability

Imagen provides image editing through its capability model:

  • imagen-3.0-capability-001

Note that the global location is not supported for Imagen models.

Insert objects using an auto-generated mask

Before trying this sample, complete the Before you begin section of this guide to set up your project and app.

The following sample shows how to use inpainting to insert content into an image using automatic mask generation. You provide the original image and a text prompt, and Imagen automatically detects and creates a masked area in which to modify the original image.

Swift

Image editing with Imagen models isn't supported for Swift. Check back later this year!

Kotlin

To insert objects using an auto-generated mask, specify an ImagenBackgroundMask. Use editImage() and set the editing configuration to use ImagenEditMode.INPAINT_INSERTION.

// Using this SDK to access Imagen models is a Preview release and requires opt-in
@OptIn(PublicPreviewAPI::class)
suspend fun customizeImage() {
    // Initialize the Vertex AI Gemini API backend service
    // Optionally specify the location to access the model (for example, `us-central1`)
    val ai = Firebase.ai(backend = GenerativeBackend.vertexAI(location = "us-central1"))

    // Create an `ImagenModel` instance with an Imagen "capability" model
    val model = ai.imagenModel("imagen-3.0-capability-001")

    // This example assumes 'originalImage' is a pre-loaded Bitmap.
    // In a real app, this might come from the user's device or a URL.
    val originalImage: Bitmap = TODO("Load your original image Bitmap here")

    // Provide the prompt describing the content to be inserted.
    val prompt = "a vase of flowers on the table"

    // Use the editImage API to insert the new content.
    // Pass the original image, the prompt, and an editing configuration.
    val editedImage = model.editImage(
        referenceImages = listOf(
            ImagenRawImage(originalImage.toImagenInlineImage()),
            ImagenBackgroundMask(), // Use ImagenBackgroundMask() to auto-generate the mask.
        ),
        prompt = prompt,
        // Define the editing configuration for inpainting and insertion.
        config = ImagenEditingConfig(ImagenEditMode.INPAINT_INSERTION)
    )

    // Process the resulting 'editedImage' Bitmap, for example, by displaying it in an ImageView.
}

Java

To insert objects using an auto-generated mask, specify an ImagenBackgroundMask. Use editImage() and set the editing configuration to use ImagenEditMode.INPAINT_INSERTION.

// Initialize the Vertex AI Gemini API backend service
// Optionally specify the location to access the model (for example, `us-central1`)
// Create an `ImagenModel` instance with an Imagen "capability" model
ImagenModel imagenModel = FirebaseAI.getInstance(GenerativeBackend.vertexAI("us-central1"))
        .imagenModel(
                /* modelName */ "imagen-3.0-capability-001");

ImagenModelFutures model = ImagenModelFutures.from(imagenModel);

// This example assumes 'originalImage' is a pre-loaded Bitmap.
// In a real app, this might come from the user's device or a URL.
Bitmap originalImage = null; // TODO("Load your image Bitmap here");

// Provide the prompt describing the content to be inserted.
String prompt = "a vase of flowers on the table";

// Define the list of sources for the editImage call.
// This includes the original image and the auto-generated mask.
ImagenRawImage rawOriginalImage =
    new ImagenRawImage(ImagenInlineImageKt.toImagenInlineImage(originalImage));
// Use ImagenBackgroundMask() to auto-generate the mask.
ImagenBackgroundMask rawMaskedImage = new ImagenBackgroundMask();

// Define the editing configuration for inpainting and insertion.
ImagenEditingConfig config = new ImagenEditingConfig(ImagenEditMode.INPAINT_INSERTION);

// Use the editImage API to insert the new content.
// Pass the original image, the auto-generated masked image, the prompt, and an editing configuration.
Futures.addCallback(model.editImage(Arrays.asList(rawOriginalImage, rawMaskedImage), prompt, config),
    new FutureCallback<ImagenGenerationResponse>() {
        @Override
        public void onSuccess(ImagenGenerationResponse result) {
            if (result.getImages().isEmpty()) {
                Log.d("ImageEditor", "No images generated");
                return;
            }
            Bitmap editedImage = ((ImagenInlineImage) result.getImages().get(0)).asBitmap();
            // Process and use the bitmap to display the image in your UI
        }

        @Override
        public void onFailure(Throwable t) {
            // ...
        }
    }, Executors.newSingleThreadExecutor());

Web

Image editing with Imagen models isn't supported for web apps. Check back later this year!

Dart

To insert objects using an auto-generated mask, specify an ImagenBackgroundMask. Use editImage() and set the editing configuration to use ImagenEditMode.inpaintInsertion.

import 'dart:typed_data';
import 'package:firebase_ai/firebase_ai.dart';
import 'package:firebase_core/firebase_core.dart';
import 'firebase_options.dart';

// Initialize FirebaseApp
await Firebase.initializeApp(
  options: DefaultFirebaseOptions.currentPlatform,
);

// Initialize the Vertex AI Gemini API backend service
// Optionally specify a location to access the model (for example, `us-central1`)
final ai = FirebaseAI.vertexAI(location: 'us-central1');

// Create an `ImagenModel` instance with an Imagen "capability" model
final model = ai.imagenModel(model: 'imagen-3.0-capability-001');

// This example assumes 'originalImage' is a pre-loaded Uint8List.
// In a real app, this might come from the user's device or a URL.
final Uint8List originalImage = Uint8List(0); // TODO: Load your original image data here.

// Provide the prompt describing the content to be inserted.
final prompt = 'a vase of flowers on the table';

try {
  // Use the editImage API to insert the new content.
  // Pass the original image, the prompt, and an editing configuration.
  final response = await model.editImage(
    sources: [
      ImagenRawImage(originalImage),
      ImagenBackgroundMask(), // Use ImagenBackgroundMask() to auto-generate the mask.
    ],
    prompt: prompt,
    // Define the editing configuration for inpainting and insertion.
    config: const ImagenEditingConfig(
      editMode: ImagenEditMode.inpaintInsertion,
    ),
  );

  // Process the result.
  if (response.images.isNotEmpty) {
    final editedImage = response.images.first.bytes;
    // Use the editedImage (a Uint8List) to display the image, save it, etc.
    print('Image successfully generated!');
  } else {
    // Handle the case where no images were generated.
    print('Error: No images were generated.');
  }
} catch (e) {
  // Handle any potential errors during the API call.
  print('An error occurred: $e');
}

Unity

Image editing with Imagen models isn't supported for Unity. Check back later this year!

Insert objects using a provided mask

Before trying this sample, complete the Before you begin section of this guide to set up your project and app.

The following sample shows how to use inpainting to insert content into an image using a mask defined in an image that you provide. You provide the original image, a text prompt, and the mask image.

Swift

Image editing with Imagen models isn't supported for Swift. Check back later this year!

Kotlin

To insert objects and provide your own mask image, specify an ImagenRawMask with the mask image. Use editImage() and set the editing configuration to use ImagenEditMode.INPAINT_INSERTION.

// Using this SDK to access Imagen models is a Preview release and requires opt-in
@OptIn(PublicPreviewAPI::class)
suspend fun customizeImage() {
    // Initialize the Vertex AI Gemini API backend service
    // Optionally specify the location to access the model (for example, `us-central1`)
    val ai = Firebase.ai(backend = GenerativeBackend.vertexAI(location = "us-central1"))

    // Create an `ImagenModel` instance with an Imagen "capability" model
    val model = ai.imagenModel("imagen-3.0-capability-001")

    // This example assumes 'originalImage' is a pre-loaded Bitmap.
    // In a real app, this might come from the user's device or a URL.
    val originalImage: Bitmap = TODO("Load your original image Bitmap here")

    // This example assumes 'maskImage' is a pre-loaded Bitmap that contains the masked area.
    // In a real app, this might come from the user's device or a URL.
    val maskImage: Bitmap = TODO("Load your masked image Bitmap here")

    // Provide the prompt describing the content to be inserted.
    val prompt = "a vase of flowers on the table"

    // Use the editImage API to insert the new content.
    // Pass the original image, the masked image, the prompt, and an editing configuration.
    val editedImage = model.editImage(
        referenceImages = listOf(
            ImagenRawImage(originalImage.toImagenInlineImage()),
            ImagenRawMask(maskImage.toImagenInlineImage()), // Use ImagenRawMask() to provide your own masked image.
        ),
        prompt = prompt,
        // Define the editing configuration for inpainting and insertion.
        config = ImagenEditingConfig(ImagenEditMode.INPAINT_INSERTION)
    )

    // Process the resulting 'editedImage' Bitmap, for example, by displaying it in an ImageView.
}

Java

To insert objects and provide your own mask image, specify an ImagenRawMask with the mask image. Use editImage() and set the editing configuration to use ImagenEditMode.INPAINT_INSERTION.

// Initialize the Vertex AI Gemini API backend service
// Optionally specify the location to access the model (for example, `us-central1`)
// Create an `ImagenModel` instance with an Imagen "capability" model
ImagenModel imagenModel = FirebaseAI.getInstance(GenerativeBackend.vertexAI("us-central1"))
        .imagenModel(
                /* modelName */ "imagen-3.0-capability-001");

ImagenModelFutures model = ImagenModelFutures.from(imagenModel);

// This example assumes 'originalImage' is a pre-loaded Bitmap.
// In a real app, this might come from the user's device or a URL.
Bitmap originalImage = null; // TODO("Load your original image Bitmap here");

// This example assumes 'maskImage' is a pre-loaded Bitmap that contains the masked area.
// In a real app, this might come from the user's device or a URL.
Bitmap maskImage = null; // TODO("Load your masked image Bitmap here");

// Provide the prompt describing the content to be inserted.
String prompt = "a vase of flowers on the table";

// Define the list of source images for the editImage call.
ImagenRawImage rawOriginalImage =
    new ImagenRawImage(ImagenInlineImageKt.toImagenInlineImage(originalImage));
// Use ImagenRawMask() to provide your own masked image.
ImagenRawMask rawMaskedImage =
    new ImagenRawMask(ImagenInlineImageKt.toImagenInlineImage(maskImage));

// Define the editing configuration for inpainting and insertion.
ImagenEditingConfig config = new ImagenEditingConfig(ImagenEditMode.INPAINT_INSERTION);

// Use the editImage API to insert the new content.
// Pass the original image, the masked image, the prompt, and an editing configuration.
Futures.addCallback(model.editImage(Arrays.asList(rawOriginalImage, rawMaskedImage), prompt, config), new FutureCallback<ImagenGenerationResponse>() {
    @Override
    public void onSuccess(ImagenGenerationResponse result) {
        if (result.getImages().isEmpty()) {
            Log.d("ImageEditor", "No images generated");
            return;
        }
        Bitmap editedImage = ((ImagenInlineImage) result.getImages().get(0)).asBitmap();
        // Process and use the bitmap to display the image in your UI
    }

    @Override
    public void onFailure(Throwable t) {
        // ...
    }
}, Executors.newSingleThreadExecutor());

Web

Image editing with Imagen models isn't supported for web apps. Check back later this year!

Dart

To insert objects and provide your own mask image, specify an ImagenRawMask with the mask image. Use editImage() and set the editing configuration to use ImagenEditMode.inpaintInsertion.

import 'dart:typed_data';
import 'package:firebase_ai/firebase_ai.dart';
import 'package:firebase_core/firebase_core.dart';
import 'firebase_options.dart';

// Initialize FirebaseApp
await Firebase.initializeApp(
  options: DefaultFirebaseOptions.currentPlatform,
);

// Initialize the Vertex AI Gemini API backend service
// Optionally specify a location to access the model (for example, `us-central1`)
final ai = FirebaseAI.vertexAI(location: 'us-central1');

// Create an `ImagenModel` instance with an Imagen "capability" model
final model = ai.imagenModel(model: 'imagen-3.0-capability-001');

// This example assumes 'originalImage' is a pre-loaded Uint8List.
// In a real app, this might come from the user's device or a URL.
final Uint8List originalImage = Uint8List(0); // TODO: Load your original image data here.

// This example assumes 'maskImage' is a pre-loaded Uint8List that contains the masked area.
// In a real app, this might come from the user's device or a URL.
final Uint8List maskImage = Uint8List(0); // TODO: Load your masked image data here.

// Provide the prompt describing the content to be inserted.
final prompt = 'a vase of flowers on the table';

try {
  // Use the editImage API to insert the new content.
  // Pass the original image, the prompt, and an editing configuration.
  final response = await model.editImage(
    sources: [
      ImagenRawImage(originalImage),
      ImagenRawMask(maskImage), // Use ImagenRawMask() to provide your own masked image.
    ],
    prompt: prompt,
    // Define the editing configuration for inpainting and insertion.
    config: const ImagenEditingConfig(
      editMode: ImagenEditMode.inpaintInsertion,
    ),
  );

  // Process the result.
  if (response.images.isNotEmpty) {
    final editedImage = response.images.first.bytes;
    // Use the editedImage (a Uint8List) to display the image, save it, etc.
    print('Image successfully generated!');
  } else {
    // Handle the case where no images were generated.
    print('Error: No images were generated.');
  }
} catch (e) {
  // Handle any potential errors during the API call.
  print('An error occurred: $e');
}

Unity

Image editing with Imagen models isn't supported for Unity. Check back later this year!

Best practices and limitations

We recommend dilating the mask when editing an image. This helps smooth the borders of the edit, making it more convincing. In general, a dilation value of 1% or 2% (0.01 or 0.02) is recommended.
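To make the 1%–2% recommendation concrete: dilating a mask means growing the masked region outward by a small pixel radius, so a 0.01 dilation of a 1024-pixel-wide image expands the mask by roughly 10 pixels on each side. The standalone Kotlin sketch below illustrates the operation on a binary mask. It is not part of the Firebase AI Logic SDK; how (or whether) the SDK's mask classes expose a dilation parameter is not covered here, so treat the function and its names as illustrative only.

```kotlin
// Illustration only: expand the masked (true) region of a binary mask by `radius` pixels.
fun dilateMask(mask: Array<BooleanArray>, radius: Int): Array<BooleanArray> {
    val h = mask.size
    val w = mask[0].size
    val out = Array(h) { BooleanArray(w) }
    for (y in 0 until h) for (x in 0 until w) {
        if (!mask[y][x]) continue
        // Mark every pixel within `radius` of a masked pixel (clipped at the borders).
        for (dy in -radius..radius) for (dx in -radius..radius) {
            val ny = y + dy
            val nx = x + dx
            if (ny in 0 until h && nx in 0 until w) out[ny][nx] = true
        }
    }
    return out
}

fun main() {
    // A 1% dilation of a 1024-pixel-wide image is about a 10-pixel radius.
    val radius = (0.01 * 1024).toInt() // 10

    // A single masked pixel grows into a 3x3 block under a 1-pixel dilation.
    val mask = Array(8) { BooleanArray(8) }
    mask[4][4] = true
    val dilated = dilateMask(mask, 1)
    println(dilated.sumOf { row -> row.count { it } }) // prints 9
}
```

The extra margin gives the model room to blend the inserted content with its surroundings, which is why slightly oversized masks tend to produce more convincing edits than tightly cropped ones.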
