Evaluation of AI-based Image Search: Excire vs. Adobe, Apple & Google


Adobe has recently released the new Lightroom CC and enhanced it with Adobe Sensei technology. Sensei offers several intelligent cloud services, including semantic image analysis and keyword-based image search (so-called AI search). Adobe's AI search, however, is cloud-based and therefore available in Lightroom CC only.

Fortunately, users of Adobe Lightroom Classic CC can use the Excire Search plugins for AI search. Excire Search runs locally with no cloud usage: no uploads, no downloads, and the AI engine running entirely on the local computer. So, if you're a Lightroom Classic user, the Excire Search plugin is the perfect add-on for optimizing your workflow with AI search, the benefits of which are now becoming more obvious with the new workflows possible in Lightroom CC.

Beyond the features available in Lightroom CC, Excire Search offers extras such as a very useful similarity search and more specific search functions that can find, for example, a group photo with smiling ladies at the beach.

But how well does Excire Search perform compared to Adobe and other competitors? To answer this question we performed a comprehensive evaluation and compared Excire Search with Adobe Lightroom CC, Apple Photos and Google's Vision API.

Test Dataset

The dataset used for testing consists of 1500 images in JPG format belonging to fifteen different categories, with 100 images per category. The test images were chosen randomly from a large database containing mainly images downloaded from Flickr. The categories were chosen randomly from the 500 categories that Excire Search can handle.

None of the test images was used for training (something we can know for sure only for Excire). It would have been nice to test with more classes and images but obtaining the Adobe and Apple labels required time-consuming manual work. Overall, we have been careful to design a representative and unbiased test. The following example images depict the 15 categories (semantic classes) of our test dataset (in alphabetical order from upper left to lower right):

Main Features

The following table summarizes the main features of the evaluated search engines. The given runtime duration denotes the time it takes to upload (Adobe, Google) or import (Apple, Excire) and analyze the 1500 images.

Company | Software              | Cloud vs. Local | Nr. of Categories | Runtime*
--------|-----------------------|-----------------|-------------------|----------------
Adobe   | Adobe Lightroom CC    | cloud           | unknown           | 13:01 min
Apple   | Photos                | local           | 4432              | several hours**
PRC     | Excire Search Lr v1.3 | local           | 500               | 6:13 min**
Google  | Google Vision API     | cloud           | unknown           | 22:24 min

*50 Mbit/s internet connection with an upload speed of 20 Mbit/s. WLAN: 5 GHz, 110-225 Mbit/s (only used for Lightroom CC)

**on a MacBook Pro with 2.6 GHz CPU, 8 GB RAM, SSD and macOS Sierra 10.12.6.

Cloud vs. Local

Cloud computing is becoming increasingly popular (at least with providers) and a growing number of cloud services are becoming available. An obvious benefit is that powerful servers and computing architectures can be used and can be scaled to match the needs for storage and computational power.

For AI services, an important benefit is that one can use large deep networks that would be too complex to run on a local computer.

Therefore, designing a system that runs locally on simple computers with different architectures is much more challenging and these challenges are likely to limit performance.

Then again, an obvious drawback of a cloud-based workflow for photographers is that one needs to upload and download images. While this might be acceptable for those who use the most popular cameras today (cell phones), photographers who shoot large image files for maximum quality might be more reluctant.

For others, privacy might be an issue and, after all, nobody really likes to lose control.

Excire Search has been designed such that all computations are done locally on the user’s computer. One would thus expect that it cannot match the performance levels of more powerful cloud-based solutions. We were surprised to find that this is not the case: Excire Search performs better and is faster than its competitors.

Test Procedure

For each of the four search engines, we performed the same tests to evaluate the results of the AI search. We searched with the 15 keywords corresponding to the 15 categories we evaluated, for example 'beach', 'butterfly', 'cat', etc.

Only single keywords were used, no combinations of keywords. Performance was then quantified by determining the quantities TP, FP, FN and TN:

An image is considered to be relevant for a particular keyword if it depicts the corresponding content, for example if we search with the keyword ‘cat’, all images depicting a cat are relevant.

Given the dataset described above, for each search we have 100 relevant images (P) and 1400 non-relevant images (N).

  • TP = True Positives: the number of relevant images that were found (nr. of cats that were found when searching for cats)

  • FP = False Positives: the number of non-relevant images that were found (nr. of dogs etc. that were incorrectly found when searching for cats)

  • FN = False Negatives: the number of relevant images that were missed (nr. of missed cats when searching for cats)

  • TN = True Negatives: the number of non-relevant images that were not found (nr. of dogs etc. that were correctly not found when searching for cats)


Finally, we use the following rates for evaluation:

  • Sensitivity (True Positive Rate or Hit Rate): TPR = TP / (TP + FN)
  • Specificity (True Negative Rate): TNR = TN / (FP + TN)
  • Accuracy: ACC = (TP + TN) / (P + N)
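The counts and rates above can be sketched in a few lines of Python. Note that the counts used here are made-up illustrative values, not the actual results of our test:

```python
# Evaluation metrics for a single keyword search, as defined above.
# P and N follow the test dataset: 100 relevant, 1400 non-relevant images.
P, N = 100, 1400

# Hypothetical example counts (NOT the measured results of any engine):
TP = 85        # relevant images found (cats found when searching for 'cat')
FP = 30        # non-relevant images incorrectly found (dogs returned for 'cat')
FN = P - TP    # relevant images missed
TN = N - FP    # non-relevant images correctly not returned

sensitivity = TP / (TP + FN)    # True Positive Rate (hit rate)
specificity = TN / (FP + TN)    # True Negative Rate
accuracy = (TP + TN) / (P + N)

print(f"TPR = {sensitivity:.3f}")   # 0.850
print(f"TNR = {specificity:.3f}")   # 0.979
print(f"ACC = {accuracy:.3f}")      # 0.970
```

With these example numbers, the engine would find 85 of the 100 cats while returning 30 non-cats, placing its dot toward the upper right of a sensitivity-vs-specificity plot.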

The following figure depicts the average results for the 4 engines plotted as sensitivity vs. specificity. The bars indicate the variance of the result obtained for the different keywords. The ideal result would be a dot with very short bars and placed in the upper right corner of the plot.

Discussion and Verdict

The results clearly show that Apple Photos performs the most restrictive search, meaning that it is tuned for high specificity and low sensitivity. This strategy ensures we get only a few dogs when we search for cats, but it also means, in this case, that we miss quite a few cats.

Adobe has obviously chosen the opposite strategy: it tries not to miss any cats, but in return gives us quite a few false positives (dogs).

Excire and Google strike a good compromise between sensitivity and specificity, and Excire is the best performer in this test, with a specificity somewhat better than Google's and a clearly better sensitivity. Regarding runtime, Excire Search is the fastest engine from the user's perspective.

Class Specific Results

For those interested in more details, the following plots show the class-specific results:

