Image forensics has entered a new era. Traditional techniques like Error Level Analysis and EXIF metadata inspection remain essential for detecting conventional edits (splicing, cloning, retouching), but the explosion of AI-generated imagery from DALL·E, Midjourney, and Stable Diffusion demands new approaches. Meanwhile, the C2PA Content Credentials standard — now supported by Google Pixel 10, Leica, Sony, and Adobe — is creating a provenance-first verification layer. This guide covers seven core forensic techniques, compares free and paid tools, explains how to detect both traditional manipulation and AI-generated content, and shows how C2PA is changing the game.

At a glance:
- 8M deepfakes shared (2025 est.)
- 96.6% CNN+ELA detection accuracy
- 8×8 px JPEG compression grid
- 5,000+ C2PA/CAI members
- 30+ forensic techniques available free
- 98% Sensity AI deepfake detection accuracy

How JPEG Compression Creates Forensic Evidence

Understanding JPEG compression is fundamental to image forensics. When a camera saves a photo as JPEG, it divides the image into an 8×8 pixel grid. Each block is independently compressed using the Discrete Cosine Transform (DCT), which converts spatial pixel data into frequency coefficients. High-frequency detail (sharp edges, texture) is selectively discarded based on the quality setting. The lower the quality, the more information is lost.
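To make the transform concrete, here is a minimal sketch of the DCT stage on a single 8×8 block, using SciPy's `dctn` as a stand-in for a JPEG encoder (a real encoder also quantizes the coefficients and entropy-codes them, which is omitted here):

```python
import numpy as np
from scipy.fft import dctn

# A synthetic 8x8 luminance block: a smooth horizontal gradient.
block = np.tile(np.linspace(0, 255, 8), (8, 1))

# Type-II DCT with orthonormal scaling, after the level shift by 128
# that JPEG applies before transforming each block.
coeffs = dctn(block - 128, norm="ortho")

# For a smooth block, energy concentrates in the low-frequency corner:
# the DC coefficient dominates and high-frequency terms are near zero,
# which is exactly why quantization can discard them cheaply.
print(abs(coeffs[0, 0]) > abs(coeffs[7, 7]))
```

Smooth regions therefore survive heavy quantization almost unchanged, while textured regions lose detail; this asymmetry is what later shows up as block-level compression artifacts.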

This process creates forensic evidence because every resave introduces additional compression artifacts. An unmodified image has uniform artifact levels across all 8×8 blocks. But when a region is pasted from a different image — saved at a different quality level, or from a different camera — that region’s compression artifacts will differ from the rest of the image. This inconsistency is precisely what forensic techniques exploit.

Seven Core Forensic Techniques

1. Error Level Analysis (ELA)

Error Level Analysis is the most widely used and accessible image forensics technique. First presented by Dr. Neal Krawetz at Black Hat 2007, ELA works by re-saving a JPEG at a known quality level (typically 95%) and computing the pixel-by-pixel difference between the original and the resaved version. Areas that have been modified appear at different error levels than the rest of the image.

In the ELA output: uniform surfaces (solid colors, sky) appear dark because they compress efficiently, high-contrast edges appear brighter, and manipulated regions that were spliced from a different source appear significantly brighter or darker than surrounding areas at similar texture levels. The key principle: if an image has not been manipulated, all 8×8 blocks should degrade at approximately the same rate during resaving.

ELA has important limitations. It works best on JPEG images that have been saved only a few times: multiple resaves reduce all blocks to similar error levels, making detection harder. Scaling, sharpening, and format conversion can also distort results, so ELA should always be used alongside other techniques, not in isolation. Recent research combining ELA with Convolutional Neural Networks (CNNs) achieves 96.6% accuracy on the CASIA V2 dataset for detecting splicing, copy-move, and retouching.
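The resave-and-difference procedure can be sketched in a few lines of Pillow. The `error_level_analysis` helper below is a hypothetical minimal implementation, not any specific tool's algorithm; production tools add per-channel calibration and smarter amplification:

```python
import io

import numpy as np
from PIL import Image, ImageChops

def error_level_analysis(img: Image.Image, quality: int = 95,
                         scale: int = 20) -> Image.Image:
    """Re-save as JPEG at a known quality and amplify the pixel difference."""
    buf = io.BytesIO()
    rgb = img.convert("RGB")
    rgb.save(buf, "JPEG", quality=quality)
    buf.seek(0)
    diff = ImageChops.difference(rgb, Image.open(buf))
    # Amplify the residual so subtle error-level differences become visible.
    return diff.point(lambda px: min(255, px * scale))

# Demo: a flat background with a pasted noisy patch; the patch compresses
# poorly, so its error level stands out in the ELA output.
base = Image.new("RGB", (128, 128), (40, 90, 160))
base.paste(Image.effect_noise((32, 32), 64).convert("RGB"), (48, 48))
arr = np.asarray(error_level_analysis(base))
print(arr[48:80, 48:80].mean() > arr[:32, :32].mean())
```

Note that this demo compares regions of very different texture; on real evidence the meaningful signal is an error-level mismatch between regions of *similar* texture.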

2. EXIF Metadata Analysis

EXIF (Exchangeable Image File Format) metadata is embedded in every photo taken by a digital camera or smartphone. It records the camera make and model, lens information, exposure settings (aperture, shutter speed, ISO), date and time of capture, GPS coordinates (if enabled), software used for editing, color profile, and thumbnail images. For forensic investigators, EXIF data is a fingerprint: mismatches between claimed and actual metadata reveal manipulation.

Common forensic indicators include: the editing software field showing “Adobe Photoshop” or “GIMP” when the image is claimed to be unedited, GPS coordinates that contradict the claimed location, date/time stamps that are inconsistent with the scene (shadows, lighting), camera model inconsistencies between the main image and embedded thumbnail, and completely stripped metadata (most social media platforms strip EXIF data, but the absence itself can be informative).
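Extracting the fields above takes only a few lines with Pillow's `getexif`. The demo below writes a JPEG with a made-up Software value purely for illustration; on real evidence you would open the file under investigation instead:

```python
import io

from PIL import Image
from PIL.ExifTags import TAGS

def exif_report(img: Image.Image) -> dict:
    """Map raw EXIF tag IDs to readable names for forensic review."""
    return {TAGS.get(tag, tag): value for tag, value in img.getexif().items()}

# Demo: write a JPEG with a tell-tale Software tag, then read it back.
src = Image.new("RGB", (8, 8))
exif = src.getexif()
exif[0x0131] = "Adobe Photoshop 26.0"   # 0x0131 is the Software tag
buf = io.BytesIO()
src.save(buf, "JPEG", exif=exif)
buf.seek(0)
report = exif_report(Image.open(buf))
# An editing-software signature on a "camera original" is a classic red flag.
print(report.get("Software"))
```

Cross-check the report against the claim: Software, DateTime vs. DateTimeOriginal, Model, and GPS fields should all tell the same story.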

3. Clone Detection (Copy-Move)

Clone detection identifies regions within an image that have been duplicated. This is one of the most common manipulation techniques — copying a section of sky to cover an object, or duplicating a crowd to make it appear larger. Clone detection algorithms work by dividing the image into overlapping blocks, computing a feature vector for each block (using DCT coefficients, PCA, or keypoint matching), then finding block pairs with highly similar feature vectors that are spatially separated.
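The block-matching idea can be sketched as follows. `find_cloned_blocks` is a deliberately simple illustration using exact matches on quantized DCT signatures; real detectors use robust features (keypoints, Zernike moments) and geometric verification to survive rotation and rescaling:

```python
import numpy as np
from scipy.fft import dctn

def find_cloned_blocks(gray: np.ndarray, block: int = 16, step: int = 4,
                       min_dist: int = 32) -> list:
    """Pairs of block origins whose coarse DCT signatures match."""
    seen, matches = {}, []
    h, w = gray.shape
    for y in range(0, h - block + 1, step):
        for x in range(0, w - block + 1, step):
            patch = gray[y:y + block, x:x + block].astype(float)
            # Signature: quantized low-frequency DCT coefficients, so a
            # match can survive mild recompression noise.
            sig = tuple(np.round(dctn(patch, norm="ortho")[:4, :4] / 8)
                        .astype(int).ravel())
            for prev in seen.get(sig, []):
                if abs(prev[0] - y) + abs(prev[1] - x) >= min_dist:
                    matches.append((prev, (y, x)))   # ignore trivial neighbours
            seen.setdefault(sig, []).append((y, x))
    return matches

# Demo: duplicate a textured 16x16 region to a distant location.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, (96, 96)).astype(np.uint8)
img[64:80, 64:80] = img[8:24, 8:24]          # the copy-move forgery
print(find_cloned_blocks(img))
```

The output lists the source and destination of the duplicated region; a real tool would then overlay those pairs on the image.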

4. Noise Analysis

Every digital sensor produces a characteristic noise pattern that is consistent across an authentic image. When a region is pasted from a different source — even from the same camera model at different ISO settings — the noise characteristics will differ. Noise analysis extracts the noise component from the image and visualizes inconsistencies. Manipulated regions appear as patches with different noise levels, textures, or color channel distributions.
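One simple way to visualize this is a per-block map of the noise residual, sketched below with Pillow and NumPy (the median filter is a crude denoiser standing in for the wavelet or PRNU-based estimators used in serious tools; `noise_map` is a hypothetical helper):

```python
import numpy as np
from PIL import Image, ImageFilter

def noise_map(img: Image.Image, block: int = 16) -> np.ndarray:
    """Per-block std-dev of the noise residual (image minus median filter)."""
    gray = img.convert("L")
    residual = (np.asarray(gray, float)
                - np.asarray(gray.filter(ImageFilter.MedianFilter(3)), float))
    h, w = residual.shape
    h, w = h - h % block, w - w % block
    tiles = residual[:h, :w].reshape(h // block, block, w // block, block)
    return tiles.std(axis=(1, 3))

# Demo: a clean gradient with a noisy "pasted" patch; the patch's blocks
# show a much higher residual than the untouched background.
grad = np.tile(np.linspace(0, 255, 128), (128, 1)).astype(np.uint8)
rng = np.random.default_rng(1)
grad[32:64, 32:64] = rng.integers(0, 256, (32, 32))
m = noise_map(Image.fromarray(grad))
print(m[2:4, 2:4].mean() > m[6:8, 6:8].mean())
```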

5. JPEG Ghost Detection

JPEG ghost analysis detects regions that have been saved at a different JPEG quality level than the rest of the image. The technique works by re-compressing the image at multiple quality levels (e.g., 50% through 99%) and measuring the difference at each level. An unmodified image produces a uniform difference pattern. A spliced region — originally saved at, say, 85% quality and pasted into a 95% quality image — will produce a minimum difference (a “ghost”) at approximately 85%, revealing the original quality of the spliced content.
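The quality sweep can be sketched as below (luminance only, for simplicity; the full technique computes the difference map per region rather than a single global curve). `ghost_curve` is an illustrative helper, not a specific tool's implementation:

```python
import io

import numpy as np
from PIL import Image

def ghost_curve(img: Image.Image, qualities=range(50, 100, 5)) -> dict:
    """Mean squared difference between the image and itself re-saved
    at each candidate JPEG quality."""
    gray = img.convert("L")
    original = np.asarray(gray, float)
    curve = {}
    for q in qualities:
        buf = io.BytesIO()
        gray.save(buf, "JPEG", quality=q)
        buf.seek(0)
        curve[q] = float(((original - np.asarray(Image.open(buf), float)) ** 2).mean())
    return curve

# Demo: material first compressed at quality 70 produces a dip (a "ghost")
# near 70, because re-quantizing with the same table changes the fewest pixels.
rng = np.random.default_rng(2)
buf = io.BytesIO()
Image.fromarray(rng.integers(0, 256, (64, 64)).astype(np.uint8)).save(
    buf, "JPEG", quality=70)
buf.seek(0)
curve = ghost_curve(Image.open(buf))
print(curve[70] < curve[65] and curve[70] < curve[75])
```

In a real analysis the dip is computed per region, so a spliced patch reveals its original quality while the rest of the image dips at the final save quality.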

6. Edge and Level Analysis

Manipulation often introduces inconsistencies at the boundaries between original and edited content. Edge detection algorithms (Sobel, Canny, Laplacian) highlight these transitions. Level analysis examines luminance and color channel histograms for signs of compositing: spliced regions may have different white balance, gamma curves, or color temperature than surrounding content. Shadow direction analysis checks whether all shadows in the scene are consistent with a single light source.
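Even Pillow's built-in FIND_EDGES filter (a Laplacian-style kernel) is enough to illustrate why splice boundaries stand out; Sobel or Canny give cleaner results in practice:

```python
import numpy as np
from PIL import Image, ImageFilter

# A hard-edged "pasted" square on a flat background: the splice boundary
# lights up under an edge filter while both interiors stay dark.
img = Image.new("L", (64, 64), 128)
img.paste(255, (16, 16, 48, 48))
edges = np.asarray(img.filter(ImageFilter.FIND_EDGES))
print(edges[16, 16:48].mean() > edges[32, 24:40].mean())
```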

7. AI-Generated Image Detection

The rise of generative AI — GANs (Generative Adversarial Networks) and diffusion models like Stable Diffusion, DALL·E, and Midjourney — has created an entirely new category of fake images that traditional forensic techniques struggle to detect. AI-generated images don’t splice existing content; they synthesize entirely new pixels, often with uniform compression and no EXIF data.

Detection approaches for AI-generated images include: GAN fingerprinting (GANs leave characteristic frequency-domain artifacts from their upsampling layers), diffusion model signatures (subtle patterns in noise residuals), facial geometry analysis (inconsistent pupil shapes, asymmetric earrings, garbled text), deep learning classifiers trained on millions of real vs. synthetic images, and spectral analysis in the Fourier domain (AI-generated images often show distinct high-frequency patterns). The European Parliamentary Research Service estimated 8 million deepfakes were shared in 2025, up from 500,000 in 2023. Enterprise tools like Sensity AI report 98% detection accuracy.
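The spectral idea can be sketched with a radially averaged Fourier spectrum. The demo below uses nearest-neighbour 2× upsampling as a crude stand-in for a generator's upsampling layer, merely to show that upsampling leaves a measurable imprint in the high-frequency bands; real detectors learn far subtler signatures:

```python
import numpy as np

def radial_spectrum(gray: np.ndarray, bins: int = 32) -> np.ndarray:
    """Azimuthally averaged log-magnitude spectrum; upsampling artifacts
    show up as anomalies in the high-frequency bins."""
    f = np.fft.fftshift(np.fft.fft2(gray.astype(float)))
    mag = np.log1p(np.abs(f))
    h, w = gray.shape
    yy, xx = np.indices((h, w))
    r = np.hypot(yy - h / 2, xx - w / 2)
    idx = np.minimum((r / r.max() * bins).astype(int), bins - 1)
    totals = np.bincount(idx.ravel(), weights=mag.ravel(), minlength=bins)
    counts = np.bincount(idx.ravel(), minlength=bins)
    return totals / counts

# Demo: 2x nearest-neighbour upsampling suppresses and aliases the highest
# frequency bands, unlike genuinely full-resolution noise.
rng = np.random.default_rng(3)
upsampled = np.kron(rng.random((64, 64)), np.ones((2, 2)))
natural = rng.random((128, 128))
print(radial_spectrum(upsampled)[-8:].mean() < radial_spectrum(natural)[-8:].mean())
```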

Free vs. Paid Forensic Tools

| Tool | Type | Techniques | Cost | Best For |
|---|---|---|---|---|
| Photo Forensics Studio | Web app | 30+ (ELA, noise, edges, channels, histograms, luminance gradient, quantization) | Free | Comprehensive single-image analysis |
| FotoForensics | Web app | ELA, metadata, digest, JPEG quality estimation | Free | Quick ELA checks with educational tutorials |
| Forensically | Web app | ELA, clone detection, noise analysis, level sweep, PCA | Free | Browser-based multi-technique analysis |
| InVID WeVerify | Browser plugin | Reverse image search, metadata, video keyframes | Free | Journalists and fact-checkers |
| Jeffrey’s Metadata Viewer | Web app | EXIF, XMP, IPTC extraction | Free | Deep metadata inspection |
| Hive AI | API + web | AI-generated image detection, deepfake classification | Free tier / paid | Detecting AI-generated content |
| Illuminarty | Web app | AI-generated image and text detection | Free tier / paid | Identifying synthetic media |
| Sensity AI | Enterprise | Multi-layer deepfake detection (video, image, audio) | Paid | Enterprise deepfake forensics (98% accuracy) |
| Amped Authenticate | Desktop | 40+ filters, court-ready reports, batch processing | Paid (~$2,500+) | Law enforcement and legal proceedings |

C2PA Content Credentials: The Provenance Revolution

The Coalition for Content Provenance and Authenticity (C2PA) represents a fundamental shift from detection-based forensics to provenance-based verification. Instead of analyzing an image after the fact to determine if it was manipulated, C2PA embeds cryptographically signed metadata — called Content Credentials — at the point of capture, recording the complete chain of custody: which device took the photo, what software processed it, what edits were applied, and whether AI was involved.

The standard uses X.509 certificates (the same technology behind TLS/HTTPS) to create tamper-evident manifests. If any pixel in the image is modified after signing, the credential becomes invalid. As of late 2025, the C2PA coalition has over 5,000 members including Adobe, Google, Microsoft, Intel, OpenAI, Amazon, BBC, Meta, and Sony. Hardware adoption is accelerating: Google Pixel 10 became the first smartphone to embed Content Credentials at capture (achieving C2PA Assurance Level 2, the highest current rating). Leica’s M11-P was the first camera, followed by models from Nikon and Sony. On the software side, Adobe Photoshop, Lightroom, and Firefly embed Content Credentials, and Cloudflare became the first CDN to preserve them during delivery, covering roughly 20% of web traffic.

The limitation: C2PA only works when the entire chain supports it. Most social media platforms still strip metadata on upload, breaking the credential chain. Content without credentials is not necessarily fake — it may simply have been captured by a device or processed by software that doesn’t yet support the standard.

Investigation Workflow

A systematic approach to image forensics combines multiple techniques. First, establish a chain of custody: compute the SHA-256 hash of the file immediately upon receiving it, before any analysis. Second, check for C2PA Content Credentials using the verification tool at contentcredentials.org/verify. Third, extract and examine all metadata (EXIF, XMP, IPTC) for inconsistencies. Fourth, run ELA to identify regions at different compression levels. Fifth, apply noise analysis and clone detection. Sixth, if AI generation is suspected, run the image through dedicated AI detectors. Finally, corroborate findings with reverse image search to find the original source. No single technique is definitive — each provides a signal that, combined with others, builds a forensic picture.
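The first step of that workflow, fixing the evidence hash, is a few lines of standard-library Python:

```python
import hashlib

def file_digest(path: str, chunk_size: int = 1 << 16) -> str:
    """SHA-256 of an evidence file, streamed so large images are handled."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Record the digest (with a timestamp and source note) before any tool
# touches the file; re-computing it later proves the evidence is unaltered.
print(hashlib.sha256(b"evidence").hexdigest()[:16])
```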

Key Terminology

Error Level Analysis (ELA)
A technique that re-saves a JPEG at a known quality and highlights differences. Manipulated regions appear at inconsistent compression levels, appearing brighter or darker in the ELA output than surrounding areas of similar texture.
EXIF (Exchangeable Image File Format)
Metadata standard embedded in JPEG and TIFF files by digital cameras. Records camera model, settings, date/time, GPS coordinates, and editing software. A forensic fingerprint for image authentication.
Discrete Cosine Transform (DCT)
The mathematical transform at the heart of JPEG compression. Converts 8×8 pixel blocks from spatial to frequency domain, enabling selective removal of high-frequency detail. Forensic tools analyze DCT coefficients to detect manipulation.
Copy-Move Detection
Forensic technique that identifies duplicated regions within a single image. Works by computing feature vectors for overlapping blocks and finding near-identical pairs at different spatial locations.
GAN Fingerprint
Characteristic artifacts left by Generative Adversarial Networks in synthesized images. Upsampling layers in GAN generators create periodic patterns visible in the frequency domain that distinguish AI-generated images from photographs.
C2PA (Coalition for Content Provenance and Authenticity)
An open standard for embedding cryptographically signed provenance metadata (Content Credentials) in digital media. Uses X.509 certificates to create tamper-evident chains of custody. Over 5,000 members including Adobe, Google, Microsoft, and Sony.
Content Credentials
The technical implementation of C2PA. Cryptographically bound manifests recording an asset’s complete history: capture device, edits, AI involvement, and signing identity. Invalid if any pixel is modified after signing.
JPEG Ghost
A forensic artifact that reveals when a region was originally saved at a different JPEG quality level than the surrounding image. Detected by re-compressing at multiple quality levels and finding the compression level where the spliced region’s difference is minimized.

Sources

- FotoForensics ELA Tutorial (Hacker Factor): ELA methodology and interpretation
- Enhancing Digital Image Forensics with ELA, IEEE 2024: ELA + AI/ML integration
- ELA-Enhanced Dual-Branch CNN, Research Square 2025: 96.6% accuracy on CASIA V2
- Deepfake Media Forensics: Status and Future Challenges, PMC 2025: GAN/diffusion detection survey
- CloudSEK, Best Deepfake Detection Tools 2026: 8M deepfakes estimate (European Parliamentary Research Service)
- Content Authenticity Initiative, 5,000 Members (2025): C2PA adoption
- Google Security Blog, Pixel C2PA (Sep 2025): first smartphone with Content Credentials
- Sensity AI: 98% deepfake detection accuracy, 35,000+ detections

Frequently Asked Questions

How can I tell if a photo has been manipulated?

Start with Error Level Analysis (ELA): re-save the JPEG and examine compression artifact inconsistencies. Manipulated regions typically appear brighter or darker than surrounding areas of similar texture. Combine with EXIF metadata inspection (look for editing software signatures, stripped GPS, date mismatches), clone detection (duplicated regions), and noise analysis (inconsistent sensor noise). Use free tools like our Photo Forensics Studio, FotoForensics, or Forensically. No single technique is conclusive — combine multiple signals.

Can ELA detect AI-generated images?

ELA alone is unreliable for AI-generated images because models like DALL·E, Midjourney, and Stable Diffusion produce images with uniform compression. However, ELA combined with frequency domain analysis, GAN fingerprint detection, and metadata inspection improves results. For best accuracy, use dedicated AI detectors like Hive AI, Sensity (98% accuracy), or Illuminarty alongside traditional forensics.

What is C2PA and how does it help verify images?

C2PA (Coalition for Content Provenance and Authenticity) embeds cryptographically signed Content Credentials into images at capture. This creates a tamper-evident provenance chain: device, edits, AI involvement. Supported by Google Pixel 10, Leica cameras, Sony camcorders, Adobe software. Over 5,000 member organizations. Verify at contentcredentials.org/verify.

What free tools are available for image forensics?

Top free tools: Photo Forensics Studio (30+ techniques), FotoForensics (ELA + metadata), Forensically (ELA + clone detection + noise), InVID WeVerify (journalist verification plugin), Jeffrey’s Metadata Viewer (deep EXIF extraction). For AI detection: Hive AI and Illuminarty offer free tiers. Enterprise: Sensity AI (98% accuracy) and Amped Authenticate (court-ready reports).
