CogniSign Blog

News, updates and commentary on CogniSign products and the world of image recognition technology.

Sunday, April 22, 2007

How visual search helps, and in most cases doesn’t replace, keywords and other text-based systems

Visual search and the interactive capabilities provided by the CogniSign technology allow any user to quickly search for photos that are visually similar in some important way, as defined by the user. Natural search that is beyond words becomes possible, and a user can even upload a photo or sketch from her computer to initiate a search.

Once a user has completed a visual search and has found these similar photos, productivity benefits begin to emerge because of synergy with text-based systems. Imagine a visual search of a database for a particular butterfly with a distinctive wing design. Once a user has found these similar images, he can group tag all of them with the name of the butterfly. Or, imagine searching through a collection of photos taken at a graduation party of a girl named Natalie. Photos of her can be quickly found, and they can all be group tagged with ‘Natalie’ and ‘graduation2006’ and other key tags. This group tagging based on visual search can help alleviate the current chronic shortage of relevant and meaningful tags on a site like Yahoo!’s Flickr. Users will be able to group tag dozens (or hundreds in some use cases) of photos at a time.
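The group-tagging workflow described above can be sketched in a few lines. This is an illustrative stand-in, not CogniSign’s actual API: the `group_tag` function, the photo IDs, and the in-memory tag store are all invented for the example.

```python
# Hypothetical sketch: applying tags in bulk to the results of a visual search.
# The tag store here is a plain dict mapping photo IDs to sets of tags.

def group_tag(photo_ids, tags, tag_store):
    """Attach every tag in `tags` to every photo in `photo_ids`."""
    for photo_id in photo_ids:
        tag_store.setdefault(photo_id, set()).update(tags)

# Example: tag all visually similar graduation photos in one operation.
results = ["img_014", "img_027", "img_033"]   # IDs returned by a visual search
store = {}
group_tag(results, {"Natalie", "graduation2006"}, store)
```

The point of the sketch is the shape of the workflow: one visual search produces many IDs, and one tagging call enriches all of them at once, which is where the productivity gain over photo-by-photo tagging comes from.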

Monday, March 05, 2007

Scalability is another key value proposition of the CogniSign technology

CogniSign’s core visual search algorithm is so simple and yet so powerful that it may be the most highly parallel visual search algorithm out there. What do we mean by this statement? Let’s talk about the end result: our technology can visually search across disparate databases, servers and devices, using the distributed resources of each. Visual search can happen on image and video data stored anywhere, without having to move that data from where it resides. Our technology achieves its scalability through these inherent distributable qualities. In a nutshell, our technology can perform visual search across servers, across different datacenters, and even across completely different types of computers and devices.
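The “search where the data lives” idea can be sketched as a simple fan-out-and-merge: each shard scores its own images locally, and only small (score, id) pairs travel back to be merged. The shard layout and scoring function below are assumptions made for illustration, not CogniSign’s design.

```python
# Illustrative sketch of distributed visual search: each shard keeps its own
# images and does its own scoring; the coordinator only merges small results.
from concurrent.futures import ThreadPoolExecutor
import heapq

def search_shard(shard, query, score):
    """Score every image on one shard against the query, locally."""
    return [(score(query, img), img_id) for img_id, img in shard.items()]

def distributed_search(shards, query, score, k=3):
    """Fan the query out to all shards in parallel, then merge the top-k hits."""
    with ThreadPoolExecutor() as pool:
        partials = pool.map(lambda s: search_shard(s, query, score), shards)
        hits = [hit for part in partials for hit in part]
    return heapq.nlargest(k, hits)

# Toy example: "images" are numbers, and similarity is closeness to the query.
shards = [{"a": 5, "b": 9}, {"c": 7}, {"d": 6, "e": 1}]
top = distributed_search(shards, query=7, score=lambda q, v: -abs(q - v))
```

Because only compact scores cross the network, the image and video data never has to move, which is the scalability property the post describes.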

Tuesday, February 27, 2007

How, specifically, is the CogniSign technology more "human-like"?

The key features mentioned in the previous entry are an important starting point in explaining why our technology is more human-like. Research has shown that when humans look at a person, object, or other image content, their attention moves from one key feature to the next. This “scanpath” (a more technical term from the cognitive fields of study focusing on human vision) can also be described as “serial attention”: the movement of visual attention from one key feature to the next in a serial manner. Research shows that serial attention is a key part of the human visual system, even in cases where no eye movement can be detected. Our technology moves from key feature to key feature, emulating the human visual cognition process, and the result is a more powerful visual search algorithm. It is similar in important ways to the human visual system, but at the same time is more suitable for computer processing.
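A toy rendering of the serial-attention idea: visit key features one at a time and stop as soon as a match becomes impossible, rather than comparing whole images. The feature representation (a plain set of labels) is an assumption made purely for illustration; the real algorithm is proprietary.

```python
# Toy "serial attention" matcher: key features are visited in order, and the
# search gives up early once the remaining features can't reach the threshold.

def serial_match(key_features, candidate_features, threshold=1.0):
    """Visit key features serially; bail out when a match becomes impossible."""
    needed = len(key_features)
    found = 0
    for i, feature in enumerate(key_features):
        if feature in candidate_features:
            found += 1
        # Even if every remaining feature matched, could we still pass?
        if (found + (needed - i - 1)) / needed < threshold:
            return False
    return found / needed >= threshold

print(serial_match(["wing_spot", "antenna"],
                   {"wing_spot", "antenna", "body_stripe"}))  # True
```

The early exit is the computational payoff of seriality: most non-matching candidates are rejected after examining only a feature or two.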

Wednesday, February 07, 2007

Use of both color AND shape for visual search

In our last posting, we wrote about indexing approaches used by our competitors, and the fact that CogniSign’s technology is different because it allows the visual search process to focus on key features of a source image. The ability of our technology to consider both color and shape features is a great example of this capability. Indexing approaches used by our competitors summarize images using numerical attributes (values). Their technology summarizes things like color pattern dispersion, textural qualities of the image (is it a few bold shapes or finely detailed?), etc. But summarizing an image means that you can’t look at its key features very closely, or prioritize any of them in the search. Key shape features are a good example of a local feature that gets lost in indexing. The CogniSign technology allows you to look at color as a feature, and any type of geometric shape as a feature, in any combination. This is all accomplished using the same core visual search algorithm. Needless to say, this is more human-like!
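One way to picture “color and shape through the same algorithm” is to treat every user-picked feature as a (kind, descriptor) pair scored by one comparison routine. The descriptors, weights, and similarity formula below are invented for illustration; the actual algorithm is proprietary.

```python
# Sketch: color patches and shape outlines as interchangeable "key features",
# all compared by a single similarity routine and combined by user weights.

def feature_similarity(a, b):
    """Compare two features of the same kind; 1.0 is a perfect match."""
    if a["kind"] != b["kind"]:
        return 0.0
    # Each descriptor is a small vector (e.g. mean RGB, or shape moments).
    diffs = [abs(x - y) for x, y in zip(a["descriptor"], b["descriptor"])]
    return max(0.0, 1.0 - sum(diffs) / len(diffs))

query = [
    {"kind": "color", "descriptor": [0.9, 0.4, 0.1], "weight": 0.5},  # orange patch
    {"kind": "shape", "descriptor": [0.2, 0.8, 0.5], "weight": 0.5},  # wing outline
]
candidate = [
    {"kind": "color", "descriptor": [0.85, 0.45, 0.1]},
    {"kind": "shape", "descriptor": [0.25, 0.75, 0.5]},
]
score = sum(f["weight"] * feature_similarity(f, c)
            for f, c in zip(query, candidate))
```

The design point is uniformity: because both kinds of features flow through the same scoring path, the user can mix and prioritize them freely, which a global index cannot offer.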

Monday, January 29, 2007

How is CogniSign visual search technology different?

Historically, image matching has been done using template matching, which is very computationally intensive. Imagine trying to create a collection of templates of a single object in a color image that would be used as a reference to identify, by using matching techniques, a similar object in another image. Templates would have to be created to show different scale, orientation, viewing perspective, lighting conditions, etc. A massive number of templates would be needed to do a good job. Visual search software solutions designed by our competitors have sidestepped this problem by using indexing techniques. Our competitors typically convert digital images into large sets of numerical attributes (values), which summarize the whole image by measuring the various attributes of the image’s pixels. To perform visual search for similar images, an indexing approach tabulates these attributes from the source image and then seeks out and retrieves images with similar tabulated attributes. Generally speaking, the visual search performance using this approach is not good enough for image and video applications today. Our technology goes back to the template matching approach, with two key innovations: 1) in many use cases, we let the user pick the key features of interest on an object or in an image to drive visual search, narrowing and focusing the search task; and 2) we use a proprietary technology to collect these key features and look for content that is similar to them, in a human-like way.
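The indexing approach described above can be made concrete with a minimal sketch: reduce each image to one global summary (a color histogram here) and retrieve the nearest summary. The tiny pixel grids and the histogram choice are illustrative assumptions; the sketch shows why local detail, such as a key shape in one corner, gets averaged away.

```python
# Contrast sketch: "indexing" reduces each image to a single global vector,
# then retrieves by vector distance. Local key features are lost in the summary.

def color_histogram(image, bins=4):
    """Summarize a whole image as a normalized histogram of pixel intensities."""
    hist = [0] * bins
    pixels = [p for row in image for p in row]
    for p in pixels:
        hist[min(int(p * bins), bins - 1)] += 1
    return [h / len(pixels) for h in hist]

def nearest(query_hist, database):
    """Retrieve the image whose global summary is closest to the query's."""
    return min(database, key=lambda item: sum(
        abs(a - b) for a, b in zip(query_hist, color_histogram(item[1]))))

# Images as tiny grids of intensities, purely for illustration.
db = [("sunset", [[0.9, 0.8], [0.7, 0.9]]),
      ("forest", [[0.2, 0.3], [0.1, 0.2]])]
query = [[0.85, 0.9], [0.8, 0.75]]
best = nearest(color_histogram(query), db)[0]
```

Note that nothing in the histogram records *where* any pixel sits, which is exactly the limitation that motivates searching on user-selected key features instead.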

Monday, January 08, 2007

What are the unmet market needs for visual search today?

Visual search technology takes source image content and finds visually similar content in a target database of image or video content. Traditional visual search technology on the market today generally doesn’t have adequate performance – the results in many cases aren’t similar, judged from a human perspective. That’s why there really hasn’t been a breakout market leader, though a lot of companies have certainly tried. Getting computers to perform visual search well (like a human) is much harder than it sounds. Additionally, there are some major scalability problems with traditional visual search methods based on indexing. These approaches worked OK from an IT scalability perspective when the target database to be searched was a few hundred thousand images. Today, many emerging image and video applications require search across millions of images, and traditional technology cannot address those needs. So the major unmet needs are 1) visual search performance is not human-like, and 2) scalability is a big obstacle.

Thursday, July 06, 2006

Welcome to the CogniSign Blog!

Visual search technology has finally come of age, and we’re here to talk about it.

CogniSign was formed to help computers search through images such as photos and videos as accurately as humans can -- only through millions at a time instead of just a few. Humans can browse through images and intuitively recognize similarities based on shapes, colors, image composition, object proximity, or a combination of these factors. By applying CogniSign’s award-winning “Intelligent Image Recognition Technology” to the task of visual search, now computers can too.

CogniSign was founded by us --
Dr. Lenny Kontsevich, a research scientist studying cognitive psychology, human and computer vision, and artificial intelligence, and Bryan Calkins, a software industry veteran. In this blog, you will hear from the two of us as well as several other key players from the CogniSign team.

We’ll be covering news and updates on CogniSign products -- like our innovative xcavator consumer platform, which has recently been integrated with Yahoo!’s flickr -- as well as insights into what makes our technology so special and commentaries on where we see the industry heading. You might also want to check out our more consumer-focused blog, xcavations.

Although CogniSign has been around for three years, our technology and the market for it are just now maturing. This blog exists to share our views on the evolving industry and to elicit your feedback on how we can help make it better. Your input is more than welcome -- it is necessary. We urge you to test xcavator for yourself and let us know what you think.

Don’t be surprised if you see your personal recommendations incorporated into the next version of our products.

The CogniSign Team