gemma-3-27b-it-8bit

Run locally on Apple devices with Mirai

Type
Local

From
Google

Quantisation
uint8

Size
27B

Source
Hugging Face

This is an MLX-format conversion of Google's Gemma 3 27B instruction-tuned model, quantized to 8-bit precision for efficient inference on Apple Silicon devices. The model is a multimodal language model capable of understanding both images and text, allowing it to process visual content and generate text responses based on combined image and text inputs. It's optimized for running on MLX, Apple's machine learning framework, making it suitable for on-device inference with reduced memory requirements compared to the full-precision version.
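The memory savings from 8-bit quantization can be estimated with simple arithmetic: weights-only storage is parameter count times bits per parameter. A rough sketch (ignores activation memory, KV cache, and quantization metadata overhead):

```python
# Approximate weights-only memory footprint of a 27B-parameter model
# at different precisions. Real usage is higher (activations, KV cache,
# per-group scales/zero-points for quantized weights).
def weights_gib(num_params: float, bits_per_param: int) -> float:
    """Weight storage in GiB for a given per-parameter precision."""
    return num_params * bits_per_param / 8 / 2**30

params = 27e9  # 27B parameters

fp16 = weights_gib(params, 16)  # full-precision baseline
int8 = weights_gib(params, 8)   # this 8-bit quantized variant
print(f"fp16: {fp16:.1f} GiB, uint8: {int8:.1f} GiB")
```

The 8-bit variant needs roughly half the memory of the 16-bit original, about 25 GiB of weights instead of about 50 GiB, which is what makes on-device inference on high-memory Apple Silicon machines feasible.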

1. Choose framework
2. Run the following command to install the Mirai SDK
   SPM: https://github.com/trymirai/uzu-swift
3. Set Mirai API key (Get API Key)
4. Apply code
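The SPM step above amounts to declaring the package as a dependency in your `Package.swift` manifest. A minimal sketch, in which the app name, platform minimums, version requirement, and product name are all placeholder assumptions (check the uzu-swift repository for the actual release tag and product name):

```swift
// swift-tools-version:5.9
import PackageDescription

let package = Package(
    name: "MyApp",  // hypothetical app name
    platforms: [.iOS(.v16), .macOS(.v13)],  // assumed minimums; see Mirai docs
    dependencies: [
        // Mirai SDK via Swift Package Manager (URL from the step above);
        // "1.0.0" is a placeholder version requirement
        .package(url: "https://github.com/trymirai/uzu-swift", from: "1.0.0")
    ],
    targets: [
        .executableTarget(
            name: "MyApp",
            // the product name "Uzu" is an assumption; use the name the
            // package actually exports
            dependencies: [.product(name: "Uzu", package: "uzu-swift")]
        )
    ]
)
```

In Xcode the same step is File > Add Package Dependencies, pasting the repository URL.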
