Drag and drop or browse for any PNG, JPG, or WebP image. The file is sent to the backend, where its signature bytes ("magic numbers") are checked to confirm it is actually an image before any analysis runs.
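A signature check like the one described above can be sketched as follows. This is a minimal illustration, not the app's actual code; the function and constant names are assumptions.

```python
from typing import Optional

PNG_MAGIC = b"\x89PNG\r\n\x1a\n"   # fixed 8-byte PNG signature
JPEG_MAGIC = b"\xff\xd8\xff"       # JPEG files start with these 3 bytes

def detect_image_type(data: bytes) -> Optional[str]:
    """Classify an upload by its leading bytes; None means reject it."""
    if data.startswith(PNG_MAGIC):
        return "png"
    if data.startswith(JPEG_MAGIC):
        return "jpeg"
    # WebP lives in a RIFF container: b"RIFF" <4-byte size> b"WEBP"
    if data[:4] == b"RIFF" and data[8:12] == b"WEBP":
        return "webp"
    return None
```

Checking bytes rather than trusting the file extension or `Content-Type` header means a renamed executable or script is rejected before any decoder touches it.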
Your image is processed by two independent detection systems in parallel, each targeting a different kind of forgery.
A deep learning segmentation model (a U-Net) trained to identify regions of an image that were pasted in from a different source. It outputs a pixel-level mask highlighting suspicious areas.
A classical computer vision pipeline that finds regions duplicated within the same image. It extracts SIFT keypoints, matches them against each other, and applies geometric verification to confirm cloned areas.
The original image is drawn to a canvas, and both detection outputs are layered on top. You can toggle between views to examine each finding independently or see everything at once.
The unmodified upload with no overlays.
Red overlay showing pixel regions the U-Net flagged as spliced. White mask pixels become semi-transparent red.
Green bounding boxes around matched copy-move regions identified by the SIFT pipeline.
Both the red mask and green boxes together on a single view.
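The splicing overlay described above (white mask pixels rendered as semi-transparent red) reduces to a simple alpha blend. A minimal sketch with NumPy, assuming an RGB image and a 50% alpha; names here are illustrative:

```python
import numpy as np

def apply_splice_overlay(image: np.ndarray, mask: np.ndarray,
                         alpha: float = 0.5) -> np.ndarray:
    """Blend semi-transparent red over pixels the mask flags as spliced.

    image: HxWx3 uint8 RGB; mask: HxW uint8, 255 where flagged.
    """
    out = image.astype(np.float32).copy()
    red = np.array([255.0, 0.0, 0.0])
    flagged = mask > 127                      # treat white mask pixels as hits
    out[flagged] = (1 - alpha) * out[flagged] + alpha * red
    return out.astype(np.uint8)
```

Keeping the blend semi-transparent lets the underlying pixels stay visible, so you can judge for yourself whether the flagged region looks out of place.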
Each engine produces its own score. Splicing confidence is the mean probability across all flagged pixels. Copy-move detection is binary: if the SIFT pipeline finds a geometrically verified clone, it reports a fixed 98% confidence.
The overall score takes the higher of the two, since either type of manipulation is enough to flag the image.
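The scoring rule in the two paragraphs above can be written in a few lines. This is a sketch under an assumed flagging threshold of 0.5; the function name is illustrative:

```python
import numpy as np

def combined_score(splice_probs: np.ndarray, copy_move_found: bool,
                   threshold: float = 0.5) -> float:
    """Combine both engines' scores.

    Splicing confidence = mean probability over flagged pixels (those
    above threshold); copy-move reports a fixed 0.98 when a clone is
    verified. The overall score is the max of the two.
    """
    flagged = splice_probs[splice_probs > threshold]
    splice_conf = float(flagged.mean()) if flagged.size else 0.0
    copy_move_conf = 0.98 if copy_move_found else 0.0
    return max(splice_conf, copy_move_conf)
```

Taking the maximum rather than an average means one confident engine is never diluted by the other finding nothing, which matches the stated policy that either manipulation type alone flags the image.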
Uploaded images and generated masks are stored temporarily on the server. A background task runs every 10 minutes and deletes any files older than 10 minutes. Nothing is kept permanently.
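A sweep like this is typically a small loop over file modification times. A minimal sketch, assuming plain files in a local directory (the directory path, interval constants, and function names are assumptions):

```python
import time
import threading
from pathlib import Path
from typing import Optional

UPLOAD_DIR = Path("uploads")   # illustrative storage location
MAX_AGE_SECONDS = 10 * 60      # files older than 10 minutes are deleted
SWEEP_INTERVAL = 10 * 60       # the sweep runs every 10 minutes

def sweep_old_files(directory: Path, max_age: float,
                    now: Optional[float] = None) -> int:
    """Delete files older than max_age seconds; return how many were removed."""
    now = time.time() if now is None else now
    removed = 0
    for path in directory.glob("*"):
        if path.is_file() and now - path.stat().st_mtime > max_age:
            path.unlink()
            removed += 1
    return removed

def start_cleanup_thread() -> threading.Thread:
    """Run the sweep forever in a daemon thread (dies with the server)."""
    def loop():
        while True:
            sweep_old_files(UPLOAD_DIR, MAX_AGE_SECONDS)
            time.sleep(SWEEP_INTERVAL)
    t = threading.Thread(target=loop, daemon=True)
    t.start()
    return t
```

With a 10-minute sweep over a 10-minute age limit, a file can survive at most about 20 minutes in the worst case (just missing one sweep and being caught by the next), which is the practical meaning of "nothing is kept permanently".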