
Using Google Gemini API in a Next.js App: A Practical Walkthrough


Integrate Google Gemini API in a Next.js 14.1 app using TypeScript and Firebase.

Next.js · AI integration · Google Gemini · TypeScript · Firebase

Last month, while working on a real estate platform for a client in Abu Dhabi, I hit a wall. They wanted their property listings to generate AI-powered descriptions that felt authentic in both English and Arabic. After testing a few models, Gemini API gave us the best results for Gulf Arabic. Now I’m writing down exactly how we integrated it in Next.js 14.1 — because no one needs another vague “Hello World” tutorial when you’re trying to ship actual features.

Setting Up the Project Structure in Next.js 14.1

First, I used create-next-app with the --typescript, --eslint, and --app flags. Basic but solid setup:

bash
npx create-next-app@latest --ts --eslint --app my-ai-app

Then installed the @google/generative-ai SDK and zod for input validation:

bash
npm install @google/generative-ai zod

Created a new app/api/gemini/route.ts endpoint and a components/AiForm.tsx component. Firebase came in to handle session tokens — we’d hit rate limits fast without it, which I’ll explain later.

Configuring Gemini API and Handling Authentication

My first instinct was to create a service account in Google Cloud Console, but the @google/generative-ai SDK authenticates with a plain API key instead. Generated one in Google AI Studio and saved it in .env.local:

GOOGLE_GENAI_API_KEY=your-key-here

Wrote a utility function to initialize the model:

ts
import { GoogleGenerativeAI } from "@google/generative-ai";

const genAI = new GoogleGenerativeAI(process.env.GOOGLE_GENAI_API_KEY!);
const model = genAI.getGenerativeModel({ model: "gemini-pro" });

This part was straightforward. What wasn’t? Realizing 75% of our initial prompts got cut off by the 2048-token limit. Had to increase maxOutputTokens in the generation config and trim some verbose prompt instructions.

Building the API Endpoint with POST Requests

The Next.js route handler (app/api/gemini/route.ts) took a prompt and language from the request body:

ts
export async function POST(request: Request) {
  const { prompt, language } = await request.json();

  const result = await model.generateContent({
    // contents takes role + parts objects, not bare { text } entries
    contents: [{ role: "user", parts: [{ text: prompt }] }],
    generationConfig: {
      maxOutputTokens: 4096,
    },
  });

  return Response.json({ text: result.response.text() });
}

Wait — why not check the content safety settings? Good catch. We added this later after seeing one of our Arabic queries get flagged for “dangerous” content because of a false positive. Use the safetySettings array to tweak thresholds.
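For reference, a safety override of that shape could look like the sketch below. The category and threshold names follow the API's published string enums, but which categories you relax, and how far, is a judgment call for your content. BLOCK_ONLY_HIGH here is an illustration, not the exact thresholds we shipped:

```typescript
// Hypothetical safetySettings sketch: relax two categories that were
// producing false positives. Pass this array in getGenerativeModel()
// alongside the model name, or per-request in generateContent().
const safetySettings = [
  {
    category: "HARM_CATEGORY_DANGEROUS_CONTENT",
    threshold: "BLOCK_ONLY_HIGH",
  },
  {
    category: "HARM_CATEGORY_HARASSMENT",
    threshold: "BLOCK_ONLY_HIGH",
  },
];
```

Log every response that still gets blocked; the flag reasons in the response metadata tell you which category to tune next.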

Frontend Component with Form Validation

The AiForm.tsx used React’s useState to manage input and results:

tsx
import { useState, type FormEvent } from "react";

const [inputValue, setInputValue] = useState("");
const [response, setResponse] = useState("");
const [isLoading, setIsLoading] = useState(false);

const handleSubmit = async (e: FormEvent) => {
  e.preventDefault();
  setIsLoading(true);

  try {
    const res = await fetch("/api/gemini", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ prompt: inputValue, language: "ar" }),
    });

    const data = await res.json();
    setResponse(data.text);
  } finally {
    setIsLoading(false);
  }
};

We wrapped validation with zod to prevent empty requests and enforce character limits, which helped avoid $500/hour billing surprises on days when someone (ahem, the intern) accidentally sent base64 image strings to the API.
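Conceptually, the rules the zod schema enforced boil down to the checks below, sketched here as a plain function so it stands alone. The 2000-character cap is a hypothetical value for illustration, not the client's actual limit:

```typescript
// Request guard mirroring the zod schema's rules: non-empty prompt,
// a hard length cap, and a whitelist of supported languages.
type GeminiRequest = { prompt: string; language: "en" | "ar" };

function validateRequest(body: unknown): GeminiRequest | null {
  if (typeof body !== "object" || body === null) return null;
  const { prompt, language } = body as Record<string, unknown>;
  if (typeof prompt !== "string") return null;

  const trimmed = prompt.trim();
  // Reject empty prompts and anything suspiciously long
  // (e.g. a base64 image pasted into the text field).
  if (trimmed.length === 0 || trimmed.length > 2000) return null;
  if (language !== "en" && language !== "ar") return null;

  return { prompt: trimmed, language };
}
```

In the route handler, a null result becomes a 400 response before any tokens are spent.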

Case Study: Property Descriptions at Reach Home Properties

For Reach Home Properties, we fed Gemini API structured property data instead of raw prompts. Example input:

“Write a 300-word property description in Arabic for a 3-bedroom villa in Al Reem Island with a sea view, maid’s room, and a swimming pool. Use formal tone.”

The generated output went through a moderation check using Firebase’s Realtime Database to track flagged responses. One funny moment? Gemini called a rooftop terrace “an observation deck for stargazing” in a draft — poetic, but not quite what the client wanted.

Handling Errors and Optimizing Gemini API Usage

We hit the 429 Too Many Requests error twice during testing. Firebase helped here — storing usage counters per session to enforce a 200 requests/hour cap.
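The counter logic itself is simple; here is an in-memory sketch of the 200 requests/hour cap. In production the counters lived in Firebase so they survived across serverless invocations, which this simplification does not capture:

```typescript
// Fixed-window rate limiter: one counter per session, reset hourly.
const WINDOW_MS = 60 * 60 * 1000;
const LIMIT = 200;

const usage = new Map<string, { windowStart: number; count: number }>();

function allowRequest(sessionId: string, now: number = Date.now()): boolean {
  const entry = usage.get(sessionId);

  // First request, or the previous window has expired: start fresh.
  if (!entry || now - entry.windowStart >= WINDOW_MS) {
    usage.set(sessionId, { windowStart: now, count: 1 });
    return true;
  }

  if (entry.count >= LIMIT) return false; // caller should respond with 429

  entry.count += 1;
  return true;
}
```

With Firebase you would run the read-increment-write as a transaction so two concurrent requests cannot both sneak under the cap.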

Another mistake: assuming gemini-pro handles code generation. For a logistics client in Dubai, we tried using it to generate SQL queries dynamically. Spoiler: it kept mixing up PostgreSQL and MySQL syntax. Switched to using it for text only.

Frequently Asked Questions

How do I fix "Missing API key" errors in Next.js?

Make sure your environment variable name matches Google Cloud’s expected format (GOOGLE_GENAI_API_KEY for their SDK) and that you're not accessing it in the browser. Use Next.js’s process.env correctly — I had to restart the dev server twice after editing .env.local.

What's the cost of Gemini API for UAE businesses?

Google charges based on input/output tokens. As of early 2026, it's $0.00025/1000 tokens input, $0.00075/1000 tokens output for gemini-pro. For a 10,000 monthly active user app, we budget ~$800/month with aggressive caching.

Why do my Arabic responses get cut off after 2048 tokens?

Increase maxOutputTokens in the generation config. Don’t just set it to 4096 and walk away — test different thresholds for each language. We found Arabic outputs sometimes need 30% more tokens than English for the same meaning.

How do I handle Gemini API downtime in production?

We built a retry queue with Supabase that falls back to static templates if a response isn't received within 4 seconds. No, the Google Cloud status page never said there was a problem — but our AWS Lambda logs tell a different story.
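The core of that fallback is just a race between the model call and a timer. A minimal sketch, with the queue and Supabase persistence omitted:

```typescript
// Resolve with the work's result if it finishes within `ms`,
// otherwise resolve with the fallback. Errors also fall back,
// so a Gemini outage degrades to the static template.
function withTimeout<T>(work: Promise<T>, ms: number, fallback: T): Promise<T> {
  return new Promise((resolve) => {
    const timer = setTimeout(() => resolve(fallback), ms);
    work.then(
      (value) => { clearTimeout(timer); resolve(value); },
      () => { clearTimeout(timer); resolve(fallback); }
    );
  });
}
```

Usage would look like `await withTimeout(callGemini(prompt), 4000, STATIC_TEMPLATE)`, where `callGemini` and `STATIC_TEMPLATE` are placeholder names for your model call and template.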


If you’re building an AI-driven app in UAE and want someone who’s been burned by these configs multiple times, book a free consultation. I’ve been doing Laravel and Next.js integrations for 7 years, but Gemini’s quirks still give me nightmares some nights.

Sarah

Senior Full-Stack Developer & PMP-Certified Project Lead — Abu Dhabi, UAE

7+ years building web applications for UAE & GCC businesses. Specialising in Laravel, Next.js, and Arabic RTL development.
