Great question! This is actually a fascinating quirk of how cameras and image formats work:
Camera sensors are physically fixed - they always capture images in the same orientation relative to the camera body. When you rotate your phone, the sensor doesn't physically rotate with it.
So every photo is technically captured in the camera's "native" orientation, and the phone just adds a metadata tag (the EXIF Orientation tag) saying "hey, this should be rotated when displayed."
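For concreteness, here's a minimal sketch (using Python's Pillow library; the file name photo.jpg is a hypothetical stand-in) that reads that tag straight off a photo:

```python
from PIL import Image

# Hypothetical path; any JPEG straight off a phone camera will do.
img = Image.open("photo.jpg")

# EXIF tag 274 (0x0112) is "Orientation". Common values:
#   1 = already upright, 3 = rotate 180° to display,
#   6 = rotate 90° CW to display, 8 = rotate 90° CCW to display.
orientation = img.getexif().get(0x0112)
print(f"Stored pixel size: {img.size}, EXIF orientation: {orientation}")
```

The pixels themselves stay in sensor orientation; only this one tag changes when you rotate the phone.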
There are several reasons cameras store rotation as metadata instead of actually rotating the image data:
1. Performance - Rotating pixels is computationally expensive. Adding a metadata tag is instant.
2. Storage efficiency - No need to process millions of pixels during capture when you can just add a tiny metadata flag.
3. Flexibility - The original unrotated data is preserved. Different apps can choose how to handle rotation.
4. Battery life - Less processing during photo capture = longer battery life.
The issue is that not all software respects EXIF orientation tags:
- Modern photo apps: ✅ Usually handle it correctly
- Web browsers: ❌ Historically ignored EXIF (modern browsers now apply it by default)
- Basic image viewers: ❌ Often ignore EXIF
- Upload/processing pipelines: ❌ Frequently strip or ignore EXIF
When users upload photos to your web app:
- Photo is taken with the phone rotated → pixels are stored "sideways" plus an EXIF Orientation tag
- Upload/processing step → may strip the EXIF data (demonstrated below)
- Your app displays the image → sees sideways pixels with no rotation info
- Result → the photo appears sideways to the user
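You can reproduce this failure locally. A sketch, again with Pillow, assuming a hypothetical upload.jpg shot with the phone rotated: a naive resize-and-resave step, which is exactly what many upload pipelines do, silently drops the EXIF data:

```python
from PIL import Image

# Hypothetical sideways-shot file: pixels stored rotated, plus an EXIF
# Orientation tag telling viewers how to display it.
original = Image.open("upload.jpg")
print("Before:", original.getexif().get(0x0112))  # e.g. 6

# Typical pipeline step: make a thumbnail and re-save. Pillow does NOT
# copy EXIF into the new file unless you pass it explicitly (exif=...).
thumb = original.resize((original.width // 4, original.height // 4))
thumb.save("thumb.jpg")

print("After:", Image.open("thumb.jpg").getexif().get(0x0112))  # None
# The pixels are still sideways, but the "rotate me" hint is gone.
```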
This is why you need to either:
- Read and apply EXIF rotation before displaying/processing (see the sketch after this list)
- Use AI Vision to detect content orientation directly
- Or a hybrid approach that tries EXIF first, then AI as backup
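Here's a minimal sketch of that hybrid approach, using Pillow for the EXIF path; detect_orientation_with_ai is a hypothetical stub standing in for whatever vision model or service you'd actually call:

```python
from PIL import Image, ImageOps

def detect_orientation_with_ai(img: Image.Image) -> int:
    """Hypothetical stub: call your vision model/service here and return
    the degrees to rotate counter-clockwise (0, 90, 180, or 270)."""
    return 0

def normalize_orientation(path: str) -> Image.Image:
    img = Image.open(path)
    if img.getexif().get(0x0112):
        # EXIF path: exif_transpose bakes the rotation into the pixels
        # and clears the tag, so the result displays upright everywhere.
        return ImageOps.exif_transpose(img)
    # EXIF missing or stripped: fall back to content-based detection.
    degrees = detect_orientation_with_ai(img)
    return img.rotate(degrees, expand=True) if degrees else img
```

Doing this once at upload time (and saving the normalized pixels) means you never have to trust downstream viewers to handle the tag.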
It's essentially a legacy compatibility issue that the industry has been slowly fixing over the past decade!