This is an automated archive made by the Lemmit Bot.
The original was posted on /r/opensource by /u/BaseballClear8592 on 2026-02-25 02:24:26+00:00.
Hey r/opensource,
I’m a bird photographer, and if you know anything about wildlife photography, you know it involves holding down the shutter and taking thousands of burst shots in a single day. Coming home and manually culling 5,000 photos to find that one perfectly sharp shot with the bird’s eye visible is soul-crushing work.
I couldn’t find a tool that did exactly what I wanted. Almost all the good AI cullers out there are subscription-based or charge per image (and they are expensive). Worse, most of them are trained for weddings/portraits and fail terribly at bird photography. So, I decided to build my own and make it completely free and open-source for everyone.
I recently released SuperPicky, a smart, local AI photo-culling desktop app built specifically for bird/wildlife photographers. It runs completely offline and is licensed under GPL-3.0.
How it works & Tech Stack: Instead of just using a generic aesthetics model, I built a pipeline that combines a few different models to mimic how a photographer actually reviews bird photos:
- YOLO11: For precise bird object detection and segmentation masks.
- SuperEyes (Custom): Detects if the bird’s eye is visible and calculates head sharpness (because if the eye isn’t sharp, the photo is usually trash).
- SuperFlier (Custom): Identifies bird-in-flight (BIF) poses and gives them bonus points.
- OSEA (Open Set Entity Annotation): Evaluates overall image aesthetics and composition, while also supporting multiple avian taxonomy standards (like AviList, eBird) for precise species identification.
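To make the pipeline idea concrete, here is a minimal sketch of how signals from several models might be fused into a star rating. The `PhotoScore` fields, weights, and the `rate_photo` logic are all hypothetical illustrations, not SuperPicky's actual scoring code:

```python
from dataclasses import dataclass

@dataclass
class PhotoScore:
    """Hypothetical per-photo signals, one from each model in the pipeline."""
    has_bird: bool        # YOLO11: was a bird detected at all?
    eye_visible: bool     # SuperEyes: is the eye visible?
    head_sharpness: float # SuperEyes: 0..1 sharpness around the head
    in_flight: bool       # SuperFlier: bird-in-flight pose?
    aesthetics: float     # OSEA: 0..1 overall aesthetics score

def rate_photo(s: PhotoScore) -> int:
    """Combine the signals into a 0-3 star rating (illustrative weights only)."""
    if not s.has_bird:
        return 0                      # no bird, no stars
    score = 0.0
    if s.eye_visible:
        score += s.head_sharpness * 2.0  # a sharp eye dominates the rating
    score += s.aesthetics
    if s.in_flight:
        score += 0.5                  # BIF bonus, as described above
    return min(3, int(score))
```

The real app presumably tunes these weights per skill level (the Beginner-to-Master thresholds mentioned below), but the structure — gate on detection, weight eye sharpness heavily, add bonuses — matches how the post describes the review process.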
What it actually does for the user:
- You feed it a folder of photos.
- It processes everything completely offline (local inference).
- It rates photos from 0 to 3 stars based on sharpness and aesthetics (with adjustable thresholds based on your skill level—Beginner to Master).
- The best part: It writes these ratings directly into the RAW file EXIF metadata so everything syncs perfectly when you import the folder into Lightroom.
A 2-Year Journey of Pure “Vibe Coding”

I’ve actually been working on this project on and off for 2 years. The craziest part? I barely wrote the core logic by hand. The entire thing was built using “vibe coding” (mostly prompting Cursor and various AI models).
It hasn’t been a smooth ride, though. For version 2.0, my AI tools convinced me to rewrite the whole app natively in Xcode using Swift and CoreML. It was a complete disaster. CoreML’s memory management completely fell apart when trying to load and coordinate multiple complex vision models simultaneously, and the project stalled for half a year.
For version 3.0, I learned my lesson and went back to a Python + PySide6 architecture. While packaging it into standalone executables (especially for Windows + CUDA) is still painful, it made running inference with YOLO11 and custom PyTorch models far easier and more stable.
Power of the Community & We’re Iterating Fast (Come Join Us!)

We are just about to push v4.1.0, which migrates temp data handling to SQLite for a ~1.9x speedup. The app supports both macOS (Apple Silicon native) and Windows (CUDA & CPU).
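The SQLite migration likely just caches per-photo inference results so they survive restarts and avoid thousands of tiny temp files. A rough sketch of what that could look like, using only the stdlib `sqlite3` module (table name and columns are assumptions, not SuperPicky's schema):

```python
import sqlite3

def open_cache(path: str = ":memory:") -> sqlite3.Connection:
    """Open (or create) a SQLite cache for per-photo scoring results."""
    con = sqlite3.connect(path)
    con.execute("""CREATE TABLE IF NOT EXISTS scores (
        filename  TEXT PRIMARY KEY,
        rating    INTEGER,
        sharpness REAL)""")
    return con

def cache_score(con: sqlite3.Connection, filename: str,
                rating: int, sharpness: float) -> None:
    """Upsert one photo's result, so re-runs skip already-scored files."""
    con.execute("INSERT OR REPLACE INTO scores VALUES (?, ?, ?)",
                (filename, rating, sharpness))
    con.commit()
```

A single indexed table like this beats scattering JSON temp files per image, which plausibly accounts for the reported speedup.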
I really have to shout out the open-source community—several awesome contributors have already jumped in to help tweak the code and fix annoying bugs (like weird Sony ARW parsing issues). We are iterating extremely fast right now. Watching this grow from my personal messy script into a fast-moving, community-supported tool has been amazing.
Because my codebase is largely stitched together via vibe coding, I would absolutely love it if some experienced Python developers, CV enthusiasts, or even photographers got involved and contributed (whether via PRs or by filing issues). Packaging native Python AI apps for desktop (especially cross-platform) has been a huge learning curve, and I’m sure my codebase could use some serious roasting and refactoring suggestions!
You can check out the source code and the app here: github.com/jamesphotography/SuperPicky
Would love to hear any thoughts, feedback, or any roasts of my codebase! Thanks for building such an awesome community.