Case Study

Building a Plant Identification Feature with AI: Behind the Scenes of Greeny Corner

4 min read

How I built a plant ID feature in React Native with TensorFlow Lite and Laravel, including deployment hurdles in UAE's desert climate.

AI image recognition · React Native app · Laravel backend · UAE tech · mobile app development

Let me level with you: building a functional plant identifier from scratch felt like trying to teach a cat to use a keyboard. Not because it’s impossible – but because the real challenge was getting everything to behave reliably when users start waving random leaves in front of their phone cameras.

This was for Greeny Corner, a UAE-based iOS app helping desert gardeners keep their plants alive. The client wanted instant ID with confidence scores, care tips, and a database of 200+ regional plants. We launched it last year using React Native with Expo SDK 54 – you might’ve seen it on the App Store. Let’s talk about the AI part.

Choosing the Right AI Stack

You’d think there are ten ways to skin this particular cat, but let’s get technical. Firebase ML was an option, but Google’s AutoML pricing for custom models scared me off after I accidentally racked up a $300 bill during testing. Instead, we went with TensorFlow Lite (TFLite) plus a retrained MobileNetV2 checkpoint. The model had to run offline – UAE users have enough connectivity issues without depending on flaky servers.

We trained locally on 14,839 images (thanks to the Abu Dhabi Desert Botanical Garden for providing the raw imagery). Training took 18 hours on an M2 MacBook. Not great, not terrible. The final quantized model was 3.8 MB – small enough to ship with the React Native bundle.
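Here’s a quick sketch of what the on-device side looks like in practice – loading the bundled model and asking it for class scores. The post doesn’t name the exact TFLite binding we used, so this assumes react-native-fast-tflite, and the input/output shapes are illustrative rather than the shipped contract:

```typescript
import { loadTensorflowModel } from 'react-native-fast-tflite';

// Infer the model handle type from the loader so we don't depend on exported type names.
type Model = Awaited<ReturnType<typeof loadTensorflowModel>>;
let model: Model | null = null;

// The quantized ~3.8 MB model ships inside the app bundle,
// so classification works fully offline.
async function getModel(): Promise<Model> {
  if (!model) {
    model = await loadTensorflowModel(require('./assets/plant_classifier.tflite'));
  }
  return model;
}

// MobileNetV2-style input: one RGB image flattened into a typed array,
// produced by the preprocessing pipeline described further down.
export async function classify(inputPixels: Float32Array): Promise<number[]> {
  const m = await getModel();
  const [scores] = await m.run([inputPixels]);
  // One confidence value per plant class.
  return Array.from(scores as Float32Array);
}
```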

The backend? Laravel 10 handles user accounts, care advice storage, and syncing with Firebase for push notifications. Not the flashiest setup, but it works for GCC clients who expect zero downtime during Ramadan night blooms.

The Image Pipeline No One Prepares You For

Don’t assume people will photograph plants like professionals. One user submitted a picture of a Calotropis procera taken through a car window at 80km/h. It looked like a green blur with palm tree smear.

We solved this with the camera in our Expo build (react-native-vision-camera under the hood) and TensorFlow.js image processing. Key steps (a code sketch follows below):

  1. Crop image to center – phones rotate incorrectly 30% of the time
  2. Auto-adjust brightness using EXIF metadata (sunset shots in Al Ain need this)
  3. Compress images to 640×480 before sending to TFLite (cuts processing time by 70%)

Yes, we lost some image fidelity. No, users can’t see the difference if their original photo was shaking worse than a dhow in a storm.
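For the curious, the resize-and-compress step is tiny. A minimal sketch, assuming expo-image-manipulator (the post doesn’t pin down the exact library), with the EXIF brightness tweak left out for brevity:

```typescript
import * as ImageManipulator from 'expo-image-manipulator';

// Shrink and recompress a captured photo before it goes anywhere near TFLite.
// 640×480 was our sweet spot: ~70% faster processing, no visible accuracy loss.
export async function prepareForInference(photoUri: string): Promise<string> {
  const result = await ImageManipulator.manipulateAsync(
    photoUri,
    [{ resize: { width: 640, height: 480 } }],
    { compress: 0.8, format: ImageManipulator.SaveFormat.JPEG },
  );
  // result.uri points at the downscaled JPEG; decode it into pixel data
  // (e.g. a Float32Array) before calling the classifier shown earlier.
  return result.uri;
}
```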

Deployment Gotchas in the UAE Context

The app blew a tire when we launched the Arabic version. Text labels in the camera preview overlapped once the layout flipped to right-to-left. Fixed it with dynamic alignment styles in React Native keyed off i18n.locale.
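In practice the fix is a handful of locale-aware styles. A rough sketch – the i18n import is a stand-in for however your app exposes the active locale:

```typescript
import { StyleSheet } from 'react-native';
import i18n from './i18n'; // hypothetical app-level helper exposing the active locale

const isRTL = i18n.locale.startsWith('ar');

// Flip alignment and flex direction with the locale so Arabic labels sit on the
// right edge of the camera preview instead of overlapping the LTR positions.
export const previewStyles = StyleSheet.create({
  labelRow: { flexDirection: isRTL ? 'row-reverse' : 'row' },
  label: {
    textAlign: isRTL ? 'right' : 'left',
    writingDirection: isRTL ? 'rtl' : 'ltr',
  },
});
```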

Another time, the model started misidentifying Ziziphus spina-christi as Acer negundo. It turned out the UAE’s extreme summer sunlight washed out leaf vein patterns – the model had been trained on springtime samples. We retrained with 500+ summer images and adjusted the contrast settings in the preprocessing step.

The Moment I Nearly Quit

Here’s the part nobody tells beginners: on-device inference in React Native is flaky. The TFLite binding for iOS would crash if image data exceeded 640×480. We spent three days debugging a buffer overflow issue. The fix? Downscale using Apple’s Vision framework before sending images to TFLite. Stupidly simple. Felt like winning the lottery.

Why Laravel Matters Here

You might wonder why pair React Native with Laravel for AI features. Answer: handling the feedback loop. Every time a user corrects a wrong ID, that feedback goes to Laravel’s MySQL database. We batch-process these corrections to retrain the model weekly using Laravel’s task scheduler.
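From the app’s side, the feedback loop is just an authenticated POST to the Laravel API. The endpoint and payload below are illustrative, not the exact contract we shipped:

```typescript
// Hypothetical shape of a correction the app sends to the Laravel backend.
interface IdCorrection {
  photoId: string;        // reference to the uploaded photo
  predictedSlug: string;  // what the model guessed
  correctedSlug: string;  // what the user says it actually is
  confidence: number;     // model confidence at prediction time
}

export async function submitCorrection(correction: IdCorrection, token: string): Promise<void> {
  // Laravel stores these rows in MySQL; a scheduled job batches them
  // into the weekly retraining run.
  await fetch('https://api.example.com/v1/corrections', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${token}`,
    },
    body: JSON.stringify(correction),
  });
}
```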

This part mostly runs on autopilot now – Laravel’s queue system handles the resource-heavy retraining without locking up the API. GCC clients love the “user contributes data” loop because they’re used to apps that feel static and unresponsive.

Scaling Pains We’re Still Solving

Current bottleneck: model retraining takes 4 hours. We’re testing Amazon SageMaker to reduce that – but training a 100-class image model in the cloud costs more than my monthly coffee budget.

We’re also debating a proper “uncertain” state – right now anything below 52% confidence simply gets flagged as “Not sure yet”. Users in Fujairah swear this is the worst UX of their lives. Jokes aside, we’re collecting usage metrics to tune the threshold.
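The thresholding itself is the easy part – picking the number is the hard part. As a sketch (0.52 is the cut-off mentioned above; the label strings are placeholders):

```typescript
// Map the classifier's top score to what the UI shows.
// 0.52 is today's cut-off; usage metrics will tell us where it should really sit.
const UNCERTAIN_THRESHOLD = 0.52;

export function labelForPrediction(
  topClassName: string,
  topScore: number,
): { title: string; certain: boolean } {
  return topScore < UNCERTAIN_THRESHOLD
    ? { title: 'Not sure yet', certain: false }
    : { title: topClassName, certain: true };
}
```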

Final Thoughts

This wasn’t my first AI rodeo, but it was the first time building for desert flora. Sometimes I stare enviously at apps like iNaturalist with their endless cloud budgets and academic grants. Then I remember: their UAE users still prefer lighter apps like Greeny Corner, since the internet cuts out when you pass through the Hajar Mountains tunnels.

Building this taught me that real-world AI is 20% code, 80% yelling at image resolutions and apologizing to users who expect perfection out the gate. If you’re shipping AI features in GCC apps, budget extra time for light management – both literal sunlight in photos and the emotional kind from demanding clients.

You can grab a copy of Greeny Corner on the App Store. Looking to build something similar? Let’s chat on sarahprofile.com/contact. I’ll tell you which parts of this saga made me pull out real chunks of hair.


Sarah

Senior Full-Stack Developer & PMP-Certified Project Lead — Abu Dhabi, UAE

7+ years building web applications for UAE & GCC businesses. Specialising in Laravel, Next.js, and Arabic RTL development.

Work with Sarah