AI & Automation

How I Added AI Features to a Client's Laravel App Without Rewriting Everything

4 min read

Integrated AI into a legacy Laravel app without rewriting the codebase — here's how it went down

Laravel · AI Integration · PHP · UAE Tech · OpenAI

A client in Abu Dhabi came to me last year, mid-sip on his third Arabic coffee: "Our users manually write 300-word descriptions for events. Can the app auto-generate these with some AI flair?"

I had two constraints:

  1. Their Laravel app had been running in production for 4 years.
  2. They weren't paying for a full-scale rewrite.

I’ve shipped 40+ projects in the UAE and GCC. Clients here want results — fast. So I did what any lazy-efficient dev would do: glue the new AI features onto the existing codebase. Let me unpack how.

Step 1: Don’t Touch the Core

The app stored events in a MySQL database with a plain event_descriptions column. Rewrite the whole event-creation flow, from Blade templates to controllers? Nope.

Instead, I:

  • Built a separate /ai-description endpoint.
  • Created a dedicated Vue component loaded only when users clicked "Generate with AI".
  • Left the legacy code untouched.

This mattered in the GCC context — clients rarely want to gamble with systems that already make money.
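To make the bolt-on idea concrete, here’s a minimal sketch of what such a route could look like. The controller name and throttle values are my assumptions for illustration, not the client’s actual code:

```php
<?php
// routes/web.php — illustrative only.
use App\Http\Controllers\AIDescriptionController;
use Illuminate\Support\Facades\Route;

// A single new endpoint, bolted on next to the legacy routes.
// The legacy event-creation flow never knows this exists.
Route::post('/ai-description', [AIDescriptionController::class, 'generate'])
    ->middleware(['auth', 'throttle:10,1']); // rate-limit a paid API early
```

The Vue component simply POSTs to this route when the user clicks "Generate with AI" — nothing in the old Blade flow changes.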

Step 2: Pick a Battle-Tested AI Engine

We considered fine-tuning a local LLM, but that’d mean hiring data scientists. The client couldn’t afford the timeline.

I went with OpenAI’s GPT-3.5 Turbo API. Reasons:

  • It handles both Arabic and English — vital for UAE users.
  • Latency from the Gulf was acceptable out of the box (unlike AWS, OpenAI doesn’t offer region-pinned endpoints, so there was nothing to configure).
  • Community PHP clients make Laravel integration, including async requests, straightforward.

Step 3: The Service Layer That Saved My Sanity

I created a standalone AIService class in app/Services. Here’s the ugly truth — Laravel service classes get messy quickly, but this one stuck to one rule: never touch the database directly.

```php
<?php

namespace App\Services;

use GuzzleHttp\Client;

class AIService
{
    protected Client $client;

    public function __construct()
    {
        $this->client = new Client([
            'base_uri' => 'https://api.openai.com',
            'headers' => [
                // Read the key from config, not env(), so config caching works.
                'Authorization' => 'Bearer ' . config('services.openai.key'),
                'Content-Type'  => 'application/json',
            ],
        ]);
    }

    public function generateEventDescription(array $data): string
    {
        // Resolve the language in PHP — putting $data["language"] inside a
        // single-quoted prompt string would never be interpolated.
        $language = $data['language'] === 'ar' ? 'Arabic' : 'English';

        $response = $this->client->post('/v1/chat/completions', [
            'json' => [
                'model' => 'gpt-3.5-turbo',
                'messages' => [
                    ['role' => 'system', 'content' => "You are a professional content writer for events in the UAE. Always respond in {$language}."],
                    ['role' => 'user', 'content' => "Write a 300-word description for an event. Name: {$data['name']}, Type: {$data['type']}, Date: {$data['date']}"],
                ],
            ],
        ]);

        $body = json_decode((string) $response->getBody(), true);

        // Error retries, logging, etc. live with the caller.
        return $body['choices'][0]['message']['content'] ?? '';
    }
}
```

This isolated the AI logic. If OpenAI went down tomorrow, only the /ai-description endpoint breaks — not the entire app.
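The isolation shows up at the controller level. Here’s a sketch of how the endpoint might wrap the service so a provider outage degrades gracefully — the controller name and validation rules are my assumptions:

```php
<?php
// app/Http/Controllers/AIDescriptionController.php — illustrative only.
namespace App\Http\Controllers;

use App\Services\AIService;
use Illuminate\Http\Request;

class AIDescriptionController extends Controller
{
    public function generate(Request $request, AIService $ai)
    {
        $data = $request->validate([
            'name'     => 'required|string|max:120',
            'type'     => 'required|string|max:60',
            'date'     => 'required|date',
            'language' => 'required|in:ar,en',
        ]);

        try {
            return response()->json([
                'description' => $ai->generateEventDescription($data),
            ]);
        } catch (\Throwable $e) {
            // If OpenAI is down, only this endpoint degrades —
            // the legacy event form keeps working without AI.
            return response()->json(['error' => 'AI generation unavailable'], 503);
        }
    }
}
```

The frontend treats a 503 as "AI unavailable, write it yourself" — exactly what users did before.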

The UAE Curveball: Mixed-Language Output

Here’s what I didn’t expect — the model kept mixing Arabic and English in results. For example, it’d write "This event offers great فرص تواصل" (Arabic-English mashup).

Solution? Two changes to the prompt:

  1. Hardcoded the response language in the system message: "Always respond in Arabic."
  2. Added a -- Arabic Text -- delimiter between the request and the model’s output.

Not elegant. But it cut down mixed-language responses by 90%.
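The two prompt changes can be sketched as a small helper. The function name buildSystemPrompt is hypothetical — it just shows hardcoding the language and adding the delimiter instruction:

```php
<?php
// Hypothetical helper sketching the two prompt changes above.
function buildSystemPrompt(string $language): string
{
    // 1. Hardcode the response language instead of describing it
    //    conditionally and hoping the model obeys.
    $lang = $language === 'ar' ? 'Arabic' : 'English';

    // 2. Ask for an explicit delimiter so mixed-language spill-over
    //    is easy to detect and strip on the way out.
    return "You are a professional content writer for events in the UAE. "
         . "Always respond in {$lang} only. "
         . "Wrap your answer between -- {$lang} Text -- delimiters.";
}

echo buildSystemPrompt('ar');
```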

Handling Errors in Real-Time (and Real UAE Latency)

The OpenAI API isn’t free, and the client wasn’t thrilled when test runs spiked the bill. I learned this the hard way after a weekend in which 12k test calls sent the invoice through the roof.

We capped the input length and batched requests. But the real fix was implementing Laravel Horizon + Redis queues. This also helped during Dubai’s Ramadan traffic spikes — async processing kept the frontend snappy.
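Here’s roughly what the queued version might look like. The job class name and the persistence step are my assumptions — the point is the ShouldQueue contract plus retry and timeout limits on a paid, flaky dependency:

```php
<?php
// app/Jobs/GenerateEventDescription.php — a sketch of the async setup.
namespace App\Jobs;

use App\Services\AIService;
use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Queue\SerializesModels;

class GenerateEventDescription implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

    public int $tries = 3;     // retry transient API failures
    public int $timeout = 30;  // don't let a slow call hog a Horizon worker

    public function __construct(public array $eventData) {}

    public function handle(AIService $ai): void
    {
        // Cap input length before it ever reaches the paid API.
        $this->eventData['name'] = mb_substr($this->eventData['name'], 0, 120);

        $description = $ai->generateEventDescription($this->eventData);

        // Persist or broadcast the result here (e.g. fire an event
        // the Vue component listens for).
    }
}
```

Dispatching is one line — GenerateEventDescription::dispatch($data) — and Horizon gives you per-queue dashboards for free, which is how we spotted the Ramadan traffic spikes in the first place.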

A Real-World Win: Tawasul Limo’s Booking Engine

If you’ve seen my portfolio, I did something similar for Tawasul Limo. They needed AI to auto-suggest limo packages based on user search history. Same principle: AI as a service layer, no rewrite.

The difference here? Tawasul’s data was cleaner (user sessions stored in Redis) — which made model prompting easier.

Final Output Metrics

After 3 weeks:

  • 82% of users accepted AI-generated descriptions without edits
  • 400ms average response time, down from 1.2s, once the async setup landed
  • Client saved ~80 hours/month of manual writing work

Would I Do It Again?

Honestly, yeah. But I wouldn’t trust the model to handle edge cases out of the box. The worst 2am moment? Debugging why one client’s event data kept triggering OpenAI errors. Turned out they had a 3AM event start time — which the model interpreted as “night club” no matter the context. Arabic culture nuances, y’all.

If you're adding AI to a legacy Laravel app:

  • Keep AI code decoupled
  • Test prompts with locale/timezone variables
  • Budget for API costs like it’s a third-party vendor

Need help with this? I’ve wasted enough weekends so you don’t have to. Reach out.


Sarah

Senior Full-Stack Developer & PMP-Certified Project Lead — Abu Dhabi, UAE

7+ years building web applications for UAE & GCC businesses. Specialising in Laravel, Next.js, and Arabic RTL development.

Work with Sarah