Case Study

Building a Plant Identification Feature with AI: Behind the Scenes of Greeny Corner

5 min read

How Sarah Nasereldeen built an AI-powered plant ID feature with React Native, Laravel, and Google Gemini.

React Native · Laravel · AI integration · plant app · UAE software development

At 2 a.m. on a Wednesday, my phone buzzed with a notification from TestFlight. A user had just submitted a photo of their plant. The app returned "desert sand viper" as the result. The actual plant? Snake Plant. Classic snake plant move — harmless, but a reminder that AI models have bad days too.

Why AI Image Recognition Fits in a Plant-Care App

People mess up plant names. I’ve been there: staring at a pothos vs. philodendron debate for weeks. My client for Greeny Corner — a local nursery looking to expand online — wanted users in Abu Dhabi and across the GCC to snap a photo and get instant ID. Most plant apps in the UAE at the time required perfect keywords. That doesn’t help when you can’t read the nursery tag in Arabic or English.

The core problem wasn’t the tech — it was user behavior. People zoom in mid-shoot, cover part of the leaf with their thumb, or submit a 500MB raw HEIC file. We had to account for chaos.

Picking the AI Model: Why Google Gemini Won

I tested three models: Clarifai’s plant API, AWS Rekognition, and Google Gemini (via the API). Accuracy wasn’t the only factor — costs for high-volume image recognition add up fast. AWS got cut first. At 1,000 classifications, the cheapest tier would’ve cost my client 1.5x what Gemini charged.

Gemini’s edge came down to training-data variety and confidence scores. When shown a split-leaf philodendron partially obscured by a cat paw, it returned two options: "philodendron bipinnatifidum" (89% confidence) and "monstera deliciosa" (63%). Competing models just guessed.
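For a sense of what that call looks like, here is a rough sketch using the official @google/generative-ai SDK. In production this lived behind the Laravel backend; the model name, prompt, and JSON contract below are my illustrative assumptions, not the app's actual code.

```typescript
// Server-side sketch: send a photo to Gemini and ask for ranked candidates.
// Model name, prompt, and response contract are illustrative assumptions.
import { GoogleGenerativeAI } from '@google/generative-ai';

const genAI = new GoogleGenerativeAI(process.env.GEMINI_API_KEY ?? '');
const model = genAI.getGenerativeModel({ model: 'gemini-1.5-flash' });

interface Candidate {
  species: string;
  confidence: number; // 0-100, as reported by the model itself
}

async function classifyPlant(base64Jpeg: string): Promise<Candidate[]> {
  const result = await model.generateContent([
    'Identify this plant. Respond with JSON only: ' +
      '[{"species": string, "confidence": number}], up to two candidates.',
    { inlineData: { data: base64Jpeg, mimeType: 'image/jpeg' } },
  ]);
  // Assumes the model honors the JSON-only instruction.
  return JSON.parse(result.response.text());
}
```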

Hooking the Frontend Together in React Native

We built Greeny Corner on Expo SDK 54 for speed; the client wanted it shipped in four months. Camera integration was brutal. The default react-native-camera package had noticeable capture latency on the cheap Android devices common in secondary UAE cities. After a week of fighting with AndroidManifest.xml, we switched to expo-camera. Remaining pain point: getting preview frames to stop jittering on iOS 17.

Image upload flow:

  1. User takes photo → compresses to JPEG (quality 0.7)
  2. Base64 encoding (no FormData support in Expo at the time)
  3. POST to Laravel API with Axios

Pro tip: use Platform.OS === 'android' as a cheap proxy for low-end hardware and force lower resolution there; almost every sub-3GB-RAM device we saw was Android. The sketch below pulls the whole flow together.
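A minimal TypeScript sketch of the three steps, assuming an expo-camera CameraView ref; the endpoint URL, payload field, and the Android quality value are illustrative:

```typescript
// Minimal sketch of the capture-compress-upload flow described above.
import axios from 'axios';
import { Platform } from 'react-native';
import type { CameraView } from 'expo-camera';

async function captureAndIdentify(camera: CameraView) {
  // Compress harder on Android, as a stand-in for the lower-resolution tip.
  const quality = Platform.OS === 'android' ? 0.5 : 0.7;

  // expo-camera can return base64 directly, which sidestepped the
  // FormData limitation mentioned above.
  const photo = await camera.takePictureAsync({ quality, base64: true });
  if (!photo?.base64) throw new Error('Capture failed');

  // POST the raw base64 payload to the Laravel API.
  const { data } = await axios.post('https://example.com/api/identify-plant', {
    image: photo.base64,
  });
  return data; // e.g. { species, confidence }
}
```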

Laravel API: The Boring-but-Necessary Glue

The app’s brain is a Laravel 10 / PHP 8.2 backend. For image recognition, we needed:

  • Strict validation (no SVGs, no 20 MP RAW files)
  • Rate limiting per user (no one should fire 100 AI requests an hour; see the sketch after this list)
  • Caching frequent results (palms and aloe vera account for ~20% of queries)
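Laravel's built-in throttle middleware handles the rate limiting in a line of route config; purely for illustration, here is the same fixed-window idea sketched in TypeScript, using the 100-requests-per-hour budget from the list:

```typescript
// Fixed-window rate limiter: at most MAX_REQUESTS per user per window.
const WINDOW_MS = 60 * 60 * 1000; // one hour
const MAX_REQUESTS = 100;

const windows = new Map<string, { start: number; count: number }>();

function allowRequest(userId: string, now = Date.now()): boolean {
  const w = windows.get(userId);
  // First request, or the previous window has expired: start a fresh one.
  if (!w || now - w.start >= WINDOW_MS) {
    windows.set(userId, { start: now, count: 1 });
    return true;
  }
  if (w.count >= MAX_REQUESTS) return false; // over the hourly budget
  w.count += 1;
  return true;
}
```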

The API endpoint looked simple:

```
POST /identify-plant
```

Reality was messier. The first build crashed when a user uploaded a base64 string with a data:image/png;base64, prefix. A typical Node.js stack strips that automatically, but in PHP we had to parse it by hand: a regex to extract the payload, plus a length check (1500 < strlen < 30000).
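The production parsing lived in PHP; to keep one language across the examples in this post, here is the same logic sketched in TypeScript, with the prefix pattern and bounds mirroring the description above:

```typescript
// Strip an optional data-URI prefix and enforce the length bounds described
// above. Production did this in PHP; the logic is identical.
const DATA_URI_PREFIX = /^data:image\/[a-z+.-]+;base64,/i;

function extractBase64Payload(raw: string): string {
  const payload = raw.replace(DATA_URI_PREFIX, '');

  // Bounds from the article: 1500 < strlen < 30000.
  if (payload.length <= 1500 || payload.length >= 30000) {
    throw new Error('Image payload outside the accepted size range');
  }
  return payload;
}
```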

Why Deploying to the UAE App Store Took 3 Weeks

Apple’s review team blocked our first two submissions. Reason? The Arabic text on buttons had 3-pixel alignment issues compared to the preview images. The client insisted on right-to-left support — common sense in a country where 50% of users prefer Arabic.

We ended up creating separate assets for Arabic users. Even tweaked the camera screen: flipped the “Flash” button to the left side because fingers on iPhones usually cover the top-right.
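React Native's I18nManager.isRTL flag is the usual hook for this kind of per-direction asset swap; the file names below are hypothetical stand-ins for the separate Arabic assets mentioned above:

```typescript
// Pick a direction-appropriate asset at require time. I18nManager.isRTL is
// React Native's real RTL flag; these file names are hypothetical.
import { I18nManager } from 'react-native';

export const flashButtonIcon = I18nManager.isRTL
  ? require('./assets/flash-button-ar.png')
  : require('./assets/flash-button-en.png');
```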

FAQ

How accurate is the AI plant identification?

Depends on the image quality. In controlled tests (frontal shots, even lighting), it hits 89-93% accuracy. In the real world? Around 76%. If the photo includes part of the stem or pot, accuracy drops to 50% — AI isn’t magic.

Can I use the app without internet?

No. Image recognition happens in the cloud. We tried on-device models like TensorFlow Lite, but UAE users' phones (3GB RAM on average) struggled with the 300MB+ plant datasets.

How much did AI integration add to the budget?

Around 25% of total dev time. The Google API costs about $0.002 per request. For 10,000 users/month? Roughly $150 to Google (about 75,000 requests, or 7 to 8 photos per user), vs. ~$380 if we'd used AWS.

Does it support both Arabic and English?

Yes — and not just text translation. We trained the model on regional plant names. A “نخيل” (date palm) in Saudi Arabia is still “Phoenix dactylifera” to the bot.


I specialize in apps that mix AI with real-world use cases — especially for UAE businesses dealing with Arabic/English workflows, like Tawasul Limo or Greeny Corner. If you're trying to ship something that just works, hit me up: Get in touch.


Sarah

Senior Full-Stack Developer & PMP-Certified Project Lead — Abu Dhabi, UAE

7+ years building web applications for UAE & GCC businesses. Specialising in Laravel, Next.js, and Arabic RTL development.

Work with Sarah