
Image Moderation

AI-powered image moderation to detect unsafe content.

Extension settings

  1. Log in to CometChat and select your app.
  2. Go to the Extensions section and enable the Image Moderation extension.
  3. Open the Settings and choose Drop messages with NSFW images if you want such messages blocked automatically.

How does it work?

After analyzing an image, the extension classifies it into one of four categories:

  1. Explicit Nudity
  2. Suggestive Nudity
  3. Violence
  4. Visually Disturbing

Along with the category, you receive a confidence score on a scale of 0 to 100. The result is injected into the message metadata as follows:

```json
"@injected": {
  "extensions": {
    "image-moderation": {
      "unsafe": "yes/no",
      "confidence": "99",
      "category": "explicit_nudity/suggestive/violence/visually_disturbing",
      "attachments": [
        {
          "data": {
            "name": "1584307225_38928710_1d3e5acc1b009e1c4ce239bedc2851f9.jpeg",
            "extension": "jpeg",
            "size": 402852,
            "mimeType": "image/png",
            "url": "https://media.com/1594986359_2067554844_9.png",
            "verdict": {
              "unsafe": "yes/no",
              "confidence": "99",
              "category": "explicit_nudity/suggestive/violence/visually_disturbing"
            }
          },
          "error": null
        },
        {
          "data": {
            "name": "1584307225_38928710_1d3e5acc1b009e1c4ce239bedc2851f9.jpeg",
            "extension": "jpeg",
            "size": 402852,
            "mimeType": "image/png",
            "url": "https://media.com/1594986359_2067554844_9.png",
            "verdict": null
          },
          "error": {
            "code": "ERROR_CODE",
            "message": "Error Message",
            "devMessage": "Error message",
            "source": "ext-api"
          }
        }
      ]
    }
  }
}
```

A confidence value below 50 is likely to be a false positive, so we recommend moderating only when the confidence is higher than 50.
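The threshold check can be sketched as a small helper. This is a minimal sketch, assuming the verdict shape shown in the metadata above; the helper name `isLikelyUnsafe` is ours, not part of the SDK:

```javascript
// Sketch: treat a verdict as actionable only above the recommended
// confidence threshold of 50. Note that confidence arrives as a string.
function isLikelyUnsafe(verdict, threshold = 50) {
  if (!verdict) return false;
  return verdict.unsafe === "yes" && Number(verdict.confidence) > threshold;
}

isLikelyUnsafe({ unsafe: "yes", confidence: "99" }); // true
isLikelyUnsafe({ unsafe: "yes", confidence: "40" }); // false: likely a false positive
```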

If the image-moderation key is missing, it means that either the extension is not enabled or it has timed out.

info

The unsafe, confidence, and category keys outside of the attachments array hold the result for the first attachment only. They have been retained for backward compatibility; for a better implementation, iterate over the attachments array instead.

Implementation

You can then either show a warning or drop the image message.


At the recipient's end, fetch the metadata from the message object by calling the getMetadata() method. Using this metadata, you can determine whether the image is safe or unsafe.

```javascript
const metadata = message.getMetadata();
if (metadata != null) {
  const injectedObject = metadata["@injected"];
  if (injectedObject != null && injectedObject.hasOwnProperty("extensions")) {
    const extensionsObject = injectedObject["extensions"];
    if (
      extensionsObject != null &&
      extensionsObject.hasOwnProperty("image-moderation")
    ) {
      const { attachments } = extensionsObject["image-moderation"];
      for (const attachment of attachments) {
        if (!attachment.error) {
          const { unsafe } = attachment.data.verdict;
          // Check the other parameters as required.
        }
      }
    }
  }
}
```
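Putting it together, one possible sketch for deciding which attachments to render and which to hide behind a warning. The helper name `partitionAttachments` is ours, not part of the SDK; the input shape matches the image-moderation metadata shown earlier:

```javascript
// Sketch: split an image-moderation result into attachments that are
// safe to render and attachments to hide or flag, using the recommended
// confidence threshold of 50.
function partitionAttachments(imageModeration, threshold = 50) {
  const safe = [];
  const flagged = [];
  for (const attachment of imageModeration.attachments || []) {
    // Attachments with an error carry no verdict; err on the side of hiding.
    if (attachment.error || !attachment.data || !attachment.data.verdict) {
      flagged.push(attachment);
      continue;
    }
    const { unsafe, confidence } = attachment.data.verdict;
    if (unsafe === "yes" && Number(confidence) > threshold) {
      flagged.push(attachment);
    } else {
      safe.push(attachment);
    }
  }
  return { safe, flagged };
}
```

In your UI, you could then render the `safe` attachments normally and show a warning overlay (or nothing at all) for the `flagged` ones.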