Abstract:
A probabilistic formulation for semantic image annotation and retrieval is proposed. Annotation and retrieval are posed as classification problems where each class is defined as the group of database images labeled with a common semantic label. It is shown that, by establishing this one-to-one correspondence between semantic labels and semantic classes, minimum probability of error annotation and retrieval are feasible with algorithms that are 1) conceptually simple, 2) computationally efficient, and 3) do not require prior semantic segmentation of training images. In particular, images are represented as bags of localized feature vectors, a mixture density is estimated for each image, and the mixtures associated with all images annotated with a common semantic label are pooled into a density estimate for the corresponding semantic class. This pooling is justified by a multiple instance learning argument and performed efficiently with a hierarchical extension of expectation-maximization. The benefits of the supervised formulation over the more complex, and currently popular, joint modeling of semantic label and visual feature distributions are illustrated through theoretical arguments and extensive experiments. The supervised formulation is shown to achieve higher accuracy than various previously published methods at a fraction of their computational cost. Finally, the proposed method is shown to be fairly robust to parameter tuning.
Published in: IEEE Transactions on Pattern Analysis and Machine Intelligence (Volume: 29, Issue: 3, March 2007)
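At its core, the supervised formulation described in the abstract estimates one class-conditional density per semantic label and annotates a new image with the labels whose densities best explain its bag of localized features (the Bayes decision rule). The following is a minimal sketch of that idea, assuming scikit-learn's GaussianMixture as the density estimator; unlike the paper, it fits each class mixture directly on the pooled features instead of using the hierarchical extension of expectation-maximization, and the features, labels, and mixture sizes are illustrative.

```python
# Minimal sketch of supervised class-density annotation (not the paper's exact
# hierarchical-EM pooling): one Gaussian mixture per semantic class, fit on the
# pooled localized feature vectors of all training images carrying that label.
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_class_densities(features_by_class, n_components=8, seed=0):
    """features_by_class: dict label -> (N, d) array of localized features
    pooled from every training image annotated with that label."""
    models = {}
    for label, X in features_by_class.items():
        gmm = GaussianMixture(n_components=n_components,
                              covariance_type='diag', random_state=seed)
        models[label] = gmm.fit(X)
    return models

def annotate(models, X_query, priors=None, top_k=5):
    """X_query: (N, d) bag of localized features from one test image.
    Ranks labels by total log-likelihood plus log prior (Bayes decision rule)."""
    scores = {}
    for label, gmm in models.items():
        log_prior = np.log(priors[label]) if priors else 0.0
        scores[label] = gmm.score_samples(X_query).sum() + log_prior
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

# Toy usage with random features standing in for, e.g., localized DCT/SIFT vectors.
rng = np.random.default_rng(0)
train = {'sky': rng.normal(0, 1, (500, 8)), 'grass': rng.normal(2, 1, (500, 8))}
models = fit_class_densities(train)
print(annotate(models, rng.normal(2, 1, (50, 8)), top_k=2))
```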
Cites in Patents (31). Patent links provided by 1790 Analytics.
1.
Khojastepour, Mohammad, "Crowded RFID reading"
Inventors:
Khojastepour, Mohammad
Abstract:
A product tagging system is provided. The product tagging system includes at least one RF backscatter transmitter configured to emit a Radio Frequency (RF) signal on a frequency. The product tagging system further includes a plurality of passive RF backscatter tags, each associated with a respective product and configured to reflect and frequency shift the RF signal to a respective different frequency. The product tagging system also includes at least one RF backscatter receiver configured to read the respective product on the respective different frequency by detecting a distributed ambient backscatter signal generated by a reflection and frequency shifting of the RF signal by a corresponding one of the plurality of passive RF backscatter tags.
Assignee:
NEC CORP
Filing Date:
20 March 2020
Grant Date:
04 May 2021
Patent Classes:
Current International Class:
H04Q0052200000, G06K0190770000, G06K0071000000, H01Q0012200000, H04W0043500000, G06K0190670000
2.
Coviello, Emanuele; Lanckriet, Gert, "Audio-based annotation of video"
Inventors:
Coviello, Emanuele; Lanckriet, Gert
Abstract:
A technique for determining annotation items associated with video information is described. During this annotation technique, a content item that includes audio information and the video information is received. For example, a file may be downloaded from a uniform resource locator. Then, the audio information is extracted from the content item, and the audio information is analyzed to determine features or descriptors that characterize the audio information. Note that the features may be determined solely by analyzing the audio information or may be determined by subsequent further analysis of at least some of the video information based on the analysis of the audio information (i.e., sequential or cascaded analysis). Next, annotation items or tags associated with the video information are determined based on the features.
Assignee:
AMAZON TECHNOLOGIES INC
Filing Date:
05 June 2014
Grant Date:
25 July 2017
Patent Classes:
Current International Class:
H04N0098000000, G11B0272800000, H04N0214390000, H04N0214400000, H04N0218400000, H04N0218450000
3.
Wang, Xin-Jing; Zhang, Lei; Liu, Ming; Li, Yi; Ma, Wei-Ying, "Associating media with metadata of near-duplicates"
Inventors:
Wang, Xin-Jing; Zhang, Lei; Liu, Ming; Li, Yi; Ma, Wei-Ying
Abstract:
Techniques for identifying near-duplicates of a media object and associating metadata of the near-duplicates with the media object are described herein. One or more devices implementing the techniques are configured to identify the near duplicates based at least on similarity attributes included in the media object. Metadata is then extracted from the near-duplicates and is associated with the media object as descriptors of the media object to enable discovery of the media object based on the descriptors.
Assignee:
MICROSOFT TECHNOLOGY LICENSING LLC
Filing Date:
28 May 2010
Grant Date:
11 July 2017
Patent Classes:
Current International Class:
G06F0173000000
4.
Makadia, Ameesh; Kumar, Sanjiv, "Annotating images"
Inventors:
Makadia, Ameesh; Kumar, Sanjiv
Abstract:
Methods, systems, and apparatus, including computer program products, for generating data for annotating images automatically. In one aspect, a method includes receiving an input image, identifying one or more nearest neighbor images of the input image from among a collection of images, in which each of the one or more nearest neighbor images is associated with a respective one or more image labels, assigning a plurality of image labels to the input image, in which the plurality of image labels are selected from the image labels associated with the one or more nearest neighbor images, and storing in a data repository the input image having the assigned plurality of image labels. In another aspect, a method includes assigning a single image label to the input image, in which the single image label is selected from labels associated with multiple ranked nearest neighbor images.
Assignee:
GOOGLE INC
Filing Date:
13 March 2013
Grant Date:
29 March 2016
Patent Classes:
Current International Class:
G06K0095400000, G06F0172400000, G06F0173000000, G06K0090000000, G06K0094600000
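For reference, a minimal sketch of the nearest-neighbor label transfer described in entry 4 above; the Euclidean distance, the vote-counting rule, and all names are illustrative assumptions rather than the patented method itself.

```python
# Nearest-neighbor label transfer: find the k visually closest database images
# and assign the labels that occur most often among them.
import numpy as np

def annotate_by_neighbors(query_feat, db_feats, db_labels, k=5, n_labels=5):
    """db_feats: (N, d) global features of the collection; db_labels: list of
    label sets, one per database image. Returns the most frequent neighbor labels."""
    dists = np.linalg.norm(db_feats - query_feat, axis=1)
    neighbors = np.argsort(dists)[:k]
    votes = {}
    for i in neighbors:
        for lab in db_labels[i]:
            votes[lab] = votes.get(lab, 0) + 1
    return sorted(votes, key=votes.get, reverse=True)[:n_labels]

db = np.random.rand(100, 16)
labels = [{'beach', 'sky'} if i % 2 else {'city'} for i in range(100)]
print(annotate_by_neighbors(np.random.rand(16), db, labels, k=7))
```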
5.
Liu, Ce; Rubinstein, Michael, "System and method for semantically annotating images"
Inventors:
Liu, Ce; Rubinstein, Michael
Abstract:
Techniques for semantically annotating images in a plurality of images, each image in the plurality of images comprising at least one image region. The techniques include identifying at least two similar images including a first image and a second image, identifying corresponding image regions in the first image and the second image, and assigning, using at least one processor, annotations to image regions in one or more images in the plurality of images by using a metric of fit indicative of a degree of match between the assigned annotations and the corresponding image regions. The metric of fit may depend on at least one annotation for each image in a subset of the plurality of images and the identified correspondence between image regions in the first image and the second image.
Assignee:
MICROSOFT TECHNOLOGY LICENSING LLC
Filing Date:
06 February 2012
Grant Date:
19 January 2016
Patent Classes:
Current International Class:
G06F0173000000
6.
Yee, Yangli Hector; Bengio, Samy; Rosenberg, Charles J.; Murphy-Chutorian, Erik, "Image classification"
Inventors:
Yee, Yangli Hector; Bengio, Samy; Rosenberg, Charles J.; Murphy-Chutorian, Erik
Abstract:
An image classification system trains an image classification model to classify images relative to text appearing with the images. Training images are iteratively selected and classified by the image classification model according to feature vectors of the training images. An independent model is trained for unique n-grams of text. The image classification system obtains text appearing with an image and parses the text into candidate labels for the image. The image classification system determines whether an image classification model has been trained for the candidate labels. When an image classification model corresponding to a candidate label has been trained, the image classification subsystem classifies the image relative to the candidate label. The image is labeled based on candidate labels for which the image is classified as a positive image.
Assignee:
GOOGLE INC
Filing Date:
21 July 2014
Grant Date:
10 November 2015
Patent Classes:
Current International Class:
G06K0096200000, G06K0094600000, G06K0096600000, G06K0095400000, G06K0096000000, G06F0173000000
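A rough sketch of the per-label flow described in entry 6 above: co-occurring text is parsed into candidate n-gram labels and the image is scored only against labels for which a trained model exists. The logistic-regression stand-in, the feature vector, and the threshold are assumptions; the patent does not specify the model family.

```python
# Parse surrounding text into candidate n-grams, then classify the image with
# whichever per-label models have been trained; keep labels with positive scores.
import numpy as np
from sklearn.linear_model import LogisticRegression

def candidate_labels(text, max_n=2):
    words = text.lower().split()
    return {' '.join(words[i:i + n]) for n in range(1, max_n + 1)
            for i in range(len(words) - n + 1)}

def label_image(feature_vec, text, per_label_models, threshold=0.5):
    labels = []
    for cand in candidate_labels(text):
        model = per_label_models.get(cand)          # skip untrained labels
        if model and model.predict_proba([feature_vec])[0, 1] > threshold:
            labels.append(cand)
    return labels

# Toy per-label model trained on random features (stand-in for real image features).
rng = np.random.default_rng(0)
X = rng.normal(0, 1, (40, 8)); y = (X[:, 0] > 0).astype(int)
models = {'sunset': LogisticRegression().fit(X, y)}
print(label_image(rng.normal(1, 1, 8), "A golden sunset over the bay", models))
```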
7.
Bengio, Samy; Murphy-Chutorian, Erik; Yee, Yangli Hector; Rosenberg, Charles J., "Image relevance model"
Inventors:
Bengio, Samy; Murphy-Chutorian, Erik; Yee, Yangli Hector; Rosenberg, Charles J.
Abstract:
Methods, systems, and apparatus, including computer program products, for identifying images relevant to a query are disclosed. An image search subsystem selects images to reference in image search results that are responsive to a query based on an image relevance model that is trained for the query. An independent image relevance model is trained for each unique query that is identified by the image search subsystem. The image relevance models can be applied to images to order image search results obtained for the query. Each relevance model is trained based on content feature values of images that are identified as being relevant to the query (e.g., frequently selected from the image search results) and images that are identified as being relevant to another unique query. The trained model is applied to the content feature values of all known images to generate an image relevance score that can be used to order search results for the query.
Assignee:
GOOGLE INC
Filing Date:
14 August 2013
Grant Date:
03 November 2015
Patent Classes:
Current International Class:
G06K0096600000, G06K0096200000, G06K0096000000, G06K0095400000, G06F0173000000
8.
Li, Yuan; Adam, Hartwig, "Systems and methods for matching visual object components"
Inventors:
Li, Yuan; Adam, Hartwig
Abstract:
Systems and methods for modeling the occurrence of common image components (e.g., sub-regions) in order to improve visual object recognition are disclosed. In one example, a query image may be matched to a training image of an object. A matched region within the training image to which the query image matches may be determined and a determination may be made whether the matched region is located within an annotated image component of the training image. When the matched region matches only to the image component, an annotation associated with the component may be identified. In another example, sub-regions within a plurality of training image corpora may be annotated as common image components including associated information (e.g., metadata). Matching sub-regions appearing in many training images of objects may be down-weighted in the matching process to reduce possible false matches to query images including common image components.
Assignee:
GOOGLE INC
Filing Date:
31 December 2013
Grant Date:
25 August 2015
Patent Classes:
Current International Class:
G06K0096200000, G06K0090000000
9.
Denney, Bradley Scott; Dusberger, Dariusz T., "Systems and methods for cluster analysis with relational truth"
Inventors:
Denney, Bradley Scott; Dusberger, Dariusz T.
Abstract:
Systems and methods for measuring similarity between a set of clusters and a set of object labels, wherein at least two of the object labels are related, receive a first set of clusters, wherein the first set of clusters was formed by clustering objects in a set of objects into clusters of the first set of clusters according to a clustering procedure; and calculate a similarity index between the first set of clusters and a set of object labels based at least in part on a relationship between two or more object labels in the set of object labels.
Assignee:
CANON KK
Filing Date:
05 July 2012
Grant Date:
25 August 2015
Patent Classes:
Current International Class:
G06F0170000000, G06F0173000000
10.
Bengio, Samy; Weston, Jason, "Joint embedding for item association"
Inventors:
Bengio, Samy; Weston, Jason
Abstract:
Methods and systems to associate semantically-related items of a plurality of item types using a joint embedding space are disclosed. The disclosed methods and systems are scalable to large, web-scale training data sets. According to an embodiment, a method for associating semantically-related items of a plurality of item types includes embedding training items of a plurality of item types in a joint embedding space configured in a memory coupled to at least one processor, learning one or more mappings into the joint embedding space for each of the item types to create a trained joint embedding space and one or more learned mappings, and associating one or more embedded training items with a first item based upon a distance in the trained joint embedding space from the first item to each said associated embedded training items. Exemplary item types that may be embedded in the joint embedding space include images, annotations, audio and video.
Assignee:
GOOGLE INC
Filing Date:
01 February 2011
Grant Date:
18 August 2015
Patent Classes:
Current International Class:
G06F0070000000, G06F0173000000
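A small sketch of the association step described in entry 10 above: items of different types are mapped into one joint space and associated by distance. The linear maps below are random placeholders for the learned mappings; the training procedure over web-scale data is omitted.

```python
# Associate items across types by cosine similarity in a shared embedding space.
import numpy as np

rng = np.random.default_rng(0)
d_embed = 32
W_image = rng.normal(0, 0.1, (d_embed, 128))      # image features -> joint space
W_annot = rng.normal(0, 0.1, (d_embed, 300))      # annotation vectors -> joint space

def embed(W, x):
    z = W @ x
    return z / np.linalg.norm(z)

def associate(query_vec, W_query, embedded_items, k=3):
    """embedded_items: list of (name, unit vector already in the joint space)."""
    q = embed(W_query, query_vec)
    scored = sorted(embedded_items, key=lambda it: -q @ it[1])
    return [name for name, _ in scored[:k]]

annotations = [(w, embed(W_annot, rng.normal(0, 1, 300)))
               for w in ['beach', 'dog', 'sunset', 'car']]
print(associate(rng.normal(0, 1, 128), W_image, annotations))
```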
11.
Das, Madirakshi; Loui, Alexander C., "Detecting recurring themes in consumer image collections"
Inventors:
Das, Madirakshi; Loui, Alexander C.
Abstract:
A method of identifying groups of related digital images in a digital image collection, comprising: analyzing each of the digital images to generate associated feature descriptors related to image content or image capture conditions; storing the feature descriptors associated with the digital images in a metadata database; automatically analyzing the metadata database to identify a plurality of frequent itemsets, wherein each of the frequent itemsets is a co-occurring feature descriptor group that occurs in at least a predefined fraction of the digital images; determining a probability of occurrence for each of the identified frequent itemsets; determining a quality score for each of the identified frequent itemsets responsive to the determined probability of occurrence; ranking the frequent itemsets based at least on the determined quality scores; and identifying one or more groups of related digital images corresponding to one or more of the top ranked frequent itemsets.
Assignee:
INTELLECTUAL VENTURES FUND 83 LLC
Filing Date:
03 January 2014
Grant Date:
26 May 2015
Patent Classes:
Current U.S. Class:
382170000
Current International Class:
G06K0090000000, G06K0096200000, G06F0173000000
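A compact sketch of the frequent-itemset theme detection described in entry 11 above: co-occurring descriptor groups above a support fraction are enumerated and ranked. The brute-force enumeration and the use of raw support as the quality score are simplifying assumptions.

```python
# Enumerate descriptor groups that co-occur in at least min_support of the images,
# then rank the surviving itemsets; each itemset corresponds to a recurring theme.
from itertools import combinations
from collections import Counter

def frequent_itemsets(image_descriptors, min_support=0.3, max_size=3):
    n = len(image_descriptors)
    counts = Counter()
    for desc in image_descriptors:
        for size in range(2, max_size + 1):
            for combo in combinations(sorted(desc), size):
                counts[combo] += 1
    return {combo: c / n for combo, c in counts.items() if c / n >= min_support}

images = [{'beach', 'people', 'summer'}, {'beach', 'people'},
          {'snow', 'people'}, {'beach', 'people', 'dog'}]
itemsets = frequent_itemsets(images, min_support=0.5)
for combo, support in sorted(itemsets.items(), key=lambda kv: -kv[1]):
    print(combo, round(support, 2))
```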
12.
Lu, Juwei; Denney, Bradley Scott, "Systems and methods for creating a visual vocabulary"
Inventors:
Lu, Juwei; Denney, Bradley Scott
Abstract:
Systems and methods for generating a visual vocabulary build a plurality of visual words via unsupervised learning on set of features of a given type; decompose one or more visual words to a collection of lower-dimensional buckets; generate labeled image representations based on the collection of lower dimensional buckets and labeled images, wherein labels associated with an image are associated with a respective representation of the image; and iteratively select a sub-collection of buckets from the collection of lower-dimensional buckets based on the labeled image representations, wherein bucket selection during any iteration after an initial iteration is based at least in part on feedback from previously selected buckets.
Assignee:
CANON KK
Filing Date:
22 August 2012
Grant Date:
10 March 2015
Patent Classes:
Current U.S. Class:
382159000, 382224000, 382225000, 382190000, 382168000
Current International Class:
G06K0096200000
13.
El-Saban, Motaz Ahmed; Wang, Xin-Jing; Sayed, May Abdelreheem, "Real-time annotation and enrichment of captured video"
Inventors:
El-Saban, Motaz Ahmed; Wang, Xin-Jing; Sayed, May Abdelreheem
Abstract:
An annotation suggestion platform may comprise a client and a server, where the client captures a media object and sends the captured object to the server, and the server provides a list of suggested annotations for a user to associate with the captured media object. The user may then select which of the suggested metadata is to be associated or stored with the captured media. In this way, a user may more easily associate metadata with a media object, facilitating the media object's search and retrieval. The server may also provide web page links related to the captured media object. A user interface for the annotation suggestion platform is also described herein, as are optimizations including indexing and tag propagation.
Assignee:
MICROSOFT CORP
Filing Date:
28 May 2010
Grant Date:
02 December 2014
Patent Classes:
Current U.S. Class:
707708000, 707728000, 707748000
Current International Class:
G06F0070000000, G06F0173000000
14.
Yee, Yangli Hector; Bengio, Samy; Rosenberg, Charles J.; Murphy-Chutorian, Erik, "Image classification"
Inventors:
Yee, Yangli Hector; Bengio, Samy; Rosenberg, Charles J.; Murphy-Chutorian, Erik
Abstract:
An image classification system trains an image classification model to classify images relative to text appearing with the images. Training images are iteratively selected and classified by the image classification model according to feature vectors of the training images. An independent model is trained for unique n-grams of text. The image classification system obtains text appearing with an image and parses the text into candidate labels for the image. The image classification system determines whether an image classification model has been trained for the candidate labels. When an image classification model corresponding to a candidate label has been trained, the image classification subsystem classifies the image relative to the candidate label. The image is labeled based on candidate labels for which the image is classified as a positive image.
Assignee:
GOOGLE INC
Filing Date:
05 June 2013
Grant Date:
22 July 2014
Patent Classes:
Current U.S. Class:
382224000, 382155000, 382159000, 382190000, 382305000, 707E17004, 707E17019, 707E17020, 707E17023
Current International Class:
G06K0096200000, G06K0094600000, G06K0096600000, G06K0095400000, G06K0096000000
15.
Kumar, Sanjiv; Rowley, Henry A.; Makadia, Ameesh, "Content-based image ranking"
Inventors:
Kumar, Sanjiv; Rowley, Henry A.; Makadia, Ameesh
Abstract:
Methods, systems, and apparatus, including computer program products, for ranking search results for queries. The method includes calculating a visual similarity score for one or more pairs of images in a plurality of images based on visual features of images in each of the one or more pairs; building a graph of images by linking each of one or more images in the plurality of images to one or more nearest neighbor images based on the visual similarity scores; associating a respective score with each of one or more images in the graph based on data indicative of user behavior relative to the image as a search result for a query; and determining a new score for each of one or more images in the graph based on the respective score of the image, and the respective scores of one or more nearest neighbors to the image.
Assignee:
GOOGLE INC
Filing Date:
25 August 2009
Grant Date:
15 July 2014
Patent Classes:
Current U.S. Class:
382190000, 382159000, 382172000, 382228000, 705014520, 707999006
Current International Class:
G06K0095400000
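An illustrative sketch of the graph-based re-scoring described in entry 15 above: images are linked to their visual nearest neighbors and a behavior-derived score is smoothed over the graph. The damping parameter, the fixed number of iterations, and the toy similarity matrix are assumptions.

```python
# Smooth click-derived scores over a k-nearest-neighbor graph built from visual
# similarity: each image keeps part of its own score and averages in its neighbors'.
import numpy as np

def rescore(similarity, click_scores, k=2, alpha=0.5, iters=10):
    """similarity: (N, N) visual similarity matrix; click_scores: (N,) scores
    derived from user behaviour."""
    n = len(click_scores)
    scores = click_scores.astype(float).copy()
    neighbors = [np.argsort(-similarity[i])[1:k + 1] for i in range(n)]
    for _ in range(iters):
        scores = np.array([alpha * click_scores[i] +
                           (1 - alpha) * scores[neighbors[i]].mean()
                           for i in range(n)])
    return scores

sim = np.array([[1.0, .9, .1, .2], [.9, 1.0, .2, .1],
                [.1, .2, 1.0, .8], [.2, .1, .8, 1.0]])
print(rescore(sim, np.array([10., 0., 5., 0.])))
```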
16.
Li, Yuan; Adam, Hartwig, "Systems and methods for matching visual object components"
Inventors:
Li, Yuan; Adam, Hartwig
Abstract:
Systems and methods for modeling the occurrence of common image components (e.g., sub-regions) in order to improve visual object recognition are disclosed. In one example, a query image may be matched to a training image of an object. A matched region within the training image to which the query image matches may be determined and a determination may be made whether the matched region is located within an annotated image component of the training image. When the matched region matches only to the image component, an annotation associated with the component may be identified. In another example, sub-regions within a plurality of training image corpora may be annotated as common image components including associated information (e.g., metadata). Matching sub-regions appearing in many training images of objects may be down-weighted in the matching process to reduce possible false matches to query images including common image components.
Assignee:
GOOGLE INC
Filing Date:
13 July 2011
Grant Date:
07 January 2014
Patent Classes:
Current U.S. Class:
382159000, 382209000
Current International Class:
G06K0096200000
17.
Das, Madirakshi; Loui, Alexander C., "Detecting recurring themes in consumer image collections"
Inventors:
Das, Madirakshi; Loui, Alexander C.
Abstract:
A method of identifying groups of related digital images in a digital image collection, comprising: analyzing each of the digital images to generate associated feature descriptors related to image content or image capture conditions; storing the feature descriptors associated with the digital images in a metadata database; automatically analyzing the metadata database to identify a plurality of frequent itemsets, wherein each of the frequent itemsets is a co-occurring feature descriptor group that occurs in at least a predefined fraction of the digital images; determining a probability of occurrence for each of the identified frequent itemsets; determining a quality score for each of the identified frequent itemsets responsive to the determined probability of occurrence; ranking the frequent itemsets based at least on the determined quality scores; and identifying one or more groups of related digital images corresponding to one or more of the top ranked frequent itemsets.
Assignee:
INTELLECTUAL VENTURES FUND 83 LLC
Filing Date:
30 August 2011
Grant Date:
07 January 2014
Patent Classes:
Current U.S. Class:
382218000
Current International Class:
G06K0096800000
18.
Das, Madirakshi; Loui, Alexander C.; Wood, Mark D., "Method for event-based semantic classification"
Inventors:
Das, Madirakshi; Loui, Alexander C.; Wood, Mark D.
Abstract:
A method of automatically classifying images in a consumer digital image collection, includes generating an event representation of the image collection; computing global time-based features for each event within the hierarchical event representation; computing content-based features for each image in an event within the hierarchical event representation; combining content-based features for each image in an event to generate event-level content-based features; and using time-based features and content-based features for each event to classify an event into one of a pre-determined set of semantic categories.
Assignee:
INTELLECTUAL VENTURES FUND 83 LLC
Filing Date:
19 November 2008
Grant Date:
17 December 2013
Patent Classes:
Current U.S. Class:
382225000, 382159000, 382195000, 382224000, 382305000, 707737000, 707821000
Current International Class:
G06K0096200000, G06K0094600000, G06K0096600000, G06K0095400000, G06K0096000000, G06F0070000000, G06F0173000000, G06F0120000000
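A toy sketch of the event-level classification described in entry 18 above: per-image content features within an event are pooled into event-level features, concatenated with time-based features, and classified into a fixed set of semantic categories. The mean pooling and the random-forest classifier are stand-ins; the patent leaves both unspecified.

```python
# Combine pooled content-based features with global time-based features per event,
# then classify the event into a predetermined semantic category.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def event_features(image_feats, time_feats):
    """image_feats: (N_images, d_content) for one event; time_feats: (d_time,)."""
    return np.concatenate([image_feats.mean(axis=0), time_feats])

rng = np.random.default_rng(0)
categories = ['vacation', 'party']
X = np.stack([event_features(rng.normal(i % 2, 1, (6, 5)), rng.normal(i % 2, 1, 3))
              for i in range(40)])
y = [categories[i % 2] for i in range(40)]
clf = RandomForestClassifier(n_estimators=25, random_state=0).fit(X, y)
print(clf.predict([event_features(rng.normal(1, 1, (4, 5)), rng.normal(1, 1, 3))]))
```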
19.
Haveliwala, Taher; Gomes, Benedict; Singhal, Amitabh K, "Using game responses to gather data"
Inventors:
Haveliwala, Taher; Gomes, Benedict; Singhal, Amitabh K
Abstract:
A system provides images or questions to multiple game participants and receives labels or answers in response thereto. The system uses the labels or answers for various data gathering purposes.
Assignee:
GOOGLE INC
Filing Date:
03 October 2011
Grant Date:
10 December 2013
Patent Classes:
Current U.S. Class:
707602000, 706020000
Current International Class:
G06F0170000000
20.
Zhang, Lei; Wang, Xin-Jing; Ma, Wei-Ying, "Building a person profile database"
Inventors:
Zhang, Lei; Wang, Xin-Jing; Ma, Wei-Ying
Abstract:
Names of entities, such as people, in an image may be identified automatically. Visually similar images of entities are retrieved, including text proximate to the visually similar images. The collected text is mined for names of entities, and the detected names are analyzed. A name may be associated with the entity in the image, based on the analysis.
Assignee:
MICROSOFT CORP
Filing Date:
09 November 2010
Grant Date:
15 October 2013
Patent Classes:
Current U.S. Class:
382118000, 382305000
Current International Class:
G06K0090000000, G06K0095400000
21.
Fukui, Motofumi; Kato, Noriji; Qi, Wenyuan, "Computer readable medium, apparatus, and method for adding identification information indicating content of a target image using decision trees generated from a learning image"
Inventors:
Fukui, Motofumi; Kato, Noriji; Qi, Wenyuan
Abstract:
A computer readable medium storing a program causing a computer to execute a process for adding image identification information is provided. The process includes: calculating first feature vectors for partial regions selected from a target image to be processed; and adding a piece of first identification information indicating content of the target image to the target image using a group of decision trees that are generated in advance on the basis of second feature vectors calculated for partial regions of a learning image and a piece of second identification information added to the entire learning image.
Assignee:
FUJI XEROX CO LTD
Filing Date:
14 January 2011
Grant Date:
17 September 2013
Patent Classes:
Current U.S. Class:
382226000, 382159000, 382195000, 706048000
Current International Class:
G06K0096800000, G06K0096600000, G06K0096200000, G06F0170000000
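A brief sketch of the idea in entry 21 above: feature vectors from partial regions of a target image are scored by trees trained on region features that inherit the whole learning image's label, and the per-region outputs are aggregated. scikit-learn's RandomForestClassifier stands in for the patent's group of decision trees, and all features are synthetic.

```python
# Train trees on region-level features labelled with the whole image's identification
# information, then aggregate per-region votes for a new image's regions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Training: regions from labelled learning images (label applies to the whole image).
region_feats = np.vstack([rng.normal(0, 1, (50, 8)), rng.normal(3, 1, (50, 8))])
region_labels = np.array(['indoor'] * 50 + ['outdoor'] * 50)
forest = RandomForestClassifier(n_estimators=20, random_state=0)
forest.fit(region_feats, region_labels)

def identify(target_regions):
    """Average per-region class probabilities into one image-level label."""
    probs = forest.predict_proba(target_regions).mean(axis=0)
    return forest.classes_[np.argmax(probs)]

print(identify(rng.normal(3, 1, (12, 8))))   # regions sampled from a new image
```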
22.
Bengio, Samy; Murphy-Chutorian, Erik; Yee, Yangli Hector; Rosenberg, Charles, "Image relevance model"
Inventors:
Bengio, Samy; Murphy-Chutorian, Erik; Yee, Yangli Hector; Rosenberg, Charles
Abstract:
Methods, systems, and apparatus, including computer program products, for identifying images relevant to a query are disclosed. An image search subsystem selects images to reference in image search results that are responsive to a query based on an image relevance model that is trained for the query. An independent image relevance model is trained for each unique query that is identified by the image search subsystem. The image relevance models can be applied to images to order image search results obtained for the query. Each relevance model is trained based on content feature values of images that are identified as being relevant to the query (e.g., frequently selected from the image search results) and images that are identified as being relevant to another unique query. The trained model is applied to the content feature values of all known images to generate an image relevance score that can be used to order search results for the query.
Assignee:
GOOGLE INC
Filing Date:
17 July 2009
Grant Date:
20 August 2013
Patent Classes:
Current U.S. Class:
382305000, 382155000, 382159000, 707E17004, 707E17019, 707E17020, 707E17023
Current International Class:
G06F0173026000, G06K0096600000, G06K0096256000, G06F0173026000, G06F0173024000, G06F0173024000
23.
Yee, Yangli Hector; Bengio, Samy; Rosenberg, Charles; Murphy-Chutorian, Erik, "Image classification"
Inventors:
Yee, Yangli Hector; Bengio, Samy; Rosenberg, Charles; Murphy-Chutorian, Erik
Abstract:
An image classification system trains an image classification model to classify images relative to text appearing with the images. Training images are iteratively selected and classified by the image classification model according to feature vectors of the training images. An independent model is trained for unique n-grams of text. The image classification system obtains text appearing with an image and parses the text into candidate labels for the image. The image classification system determines whether an image classification model has been trained for the candidate labels. When an image classification model corresponding to a candidate label has been trained, the image classification subsystem classifies the image relative to the candidate label. The image is labeled based on candidate labels for which the image is classified as a positive image.
Assignee:
GOOGLE INC
Filing Date:
17 July 2009
Grant Date:
02 July 2013
Patent Classes:
Current U.S. Class:
382224000, 382155000, 382159000, 382190000, 382305000, 707E17004, 707E17019, 707E17020, 707E17023
Current International Class:
G06K0096600000
24.
Makadia, Ameesh; Kumar, Sanjiv, "Annotating images"
Inventors:
Makadia, Ameesh; Kumar, Sanjiv
Abstract:
Methods, systems, and apparatus, including computer program products, for generating data for annotating images automatically. In one aspect, a method includes receiving an input image, identifying one or more nearest neighbor images of the input image from among a collection of images, in which each of the one or more nearest neighbor images is associated with a respective one or more image labels, assigning a plurality of image labels to the input image, in which the plurality of image labels are selected from the image labels associated with the one or more nearest neighbor images, and storing in a data repository the input image having the assigned plurality of image labels. In another aspect, a method includes assigning a single image label to the input image, in which the single image label is selected from labels associated with multiple ranked nearest neighbor images.
Assignee:
GOOGLE INC
Filing Date:
17 April 2009
Grant Date:
16 April 2013
Patent Classes:
Current U.S. Class:
382305000, 382224000, 382229000, 382209000
Current International Class:
G06K0095400000
25.
Das, Madirakshi; Loui, Alexander C., "Detecting recurring themes in consumer image collections"
Inventors:
Das, Madirakshi; Loui, Alexander C.
Abstract:
A method of identifying groups of related digital images in a digital image collection, comprising: analyzing each of the digital images to generate associated feature descriptors related to image content or image capture conditions; storing the feature descriptors associated with the digital images in a metadata database; automatically analyzing the metadata database to identify a plurality of frequent itemsets, wherein each of the frequent itemsets is a co-occurring feature descriptor group that occurs in at least a predefined fraction of the digital images; determining a probability of occurrence for each of the identified frequent itemsets; determining a quality score for each of the identified frequent itemsets responsive to the determined probability of occurrence; ranking the frequent itemsets based at least on the determined quality scores; and identifying one or more groups of related digital images corresponding to one or more of the top ranked frequent itemsets.
Assignee:
EASTMAN KODAK CO
Filing Date:
20 August 2012
Grant Date:
07 March 2013
Patent Classes:
Current International Class:
G06F0173000000
26.
Zhang, Lei; Wang, Xin-Jing; Jing, Feng; Ma, Wei-Ying, "Annotation by search"
Inventors:
Zhang, Lei; Wang, Xin-Jing; Jing, Feng; Ma, Wei-Ying
Abstract:
Annotation by search is described. In one aspect, a data store is searched for images that are semantically related to a baseline annotation of a given image and visually similar to the given image. The given image is then annotated with common concepts of annotations associated with at least a subset of the semantically and visually related images.
Assignee:
MICROSOFT CORP
Filing Date:
19 May 2006
Grant Date:
25 December 2012
Patent Classes:
Current U.S. Class:
707602000, 707001000, 707002000, 707003000, 707004000, 707005000, 707006000, 707007000, 707100000, 713180000
Current International Class:
G06F0070000
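A minimal sketch of the two-stage search-and-mine flow described in entry 26 above: the data store is filtered by the baseline annotation, the visually most similar hits are kept, and the common terms of their annotations become the new labels. The store layout, similarity measure, and term-frequency mining are assumptions.

```python
# Filter the store by the baseline annotation, rank hits by visual similarity,
# and annotate the query with the most common terms among the top hits.
import numpy as np
from collections import Counter

def annotate_by_search(query_feat, baseline, store, top_k=3, n_terms=3):
    """store: list of dicts with 'feat' (np.array) and 'text' (annotation string)."""
    related = [e for e in store if baseline.lower() in e['text'].lower()]
    related.sort(key=lambda e: np.linalg.norm(e['feat'] - query_feat))
    terms = Counter(w for e in related[:top_k] for w in e['text'].lower().split())
    return [w for w, _ in terms.most_common() if w != baseline.lower()][:n_terms]

store = [{'feat': np.array([0.1, 0.2]), 'text': 'beach sunset waves'},
         {'feat': np.array([0.0, 0.3]), 'text': 'beach palm sand'},
         {'feat': np.array([0.9, 0.9]), 'text': 'city beach night'}]
print(annotate_by_search(np.array([0.05, 0.25]), 'beach', store))
```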
27.
Dunlop, Heather; Berry, Matthew G., "Systems and methods for semantically classifying shots in video"
Inventors:
Dunlop, Heather; Berry, Matthew G.
Abstract:
The present disclosure relates to systems and methods for classifying videos based on video content. For a given video file including a plurality of frames, a subset of frames is extracted for processing. Frames that are too dark, blurry, or otherwise poor classification candidates are discarded from the subset. Generally, material classification scores that describe type of material content likely included in each frame are calculated for the remaining frames in the subset. The material classification scores are used to generate material arrangement vectors that represent the spatial arrangement of material content in each frame. The material arrangement vectors are subsequently classified to generate a scene classification score vector for each frame. The scene classification results are averaged (or otherwise processed) across all frames in the subset to associate the video file with one or more predefined scene categories related to overall types of scene content of the video file.
Assignee:
DIGITALSMITHS INC
Filing Date:
17 February 2009
Grant Date:
13 November 2012
Patent Classes:
Current U.S. Class:
382224000
Current International Class:
G06K0096200
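A compact sketch of the per-frame pipeline in entry 27 above: sample frames, drop dark ones, build a material-arrangement vector over a coarse spatial grid, score scene categories per frame, and average across frames. Every model here is a toy placeholder for the unspecified classifiers in the patent.

```python
# Per-frame material-arrangement scoring followed by averaging of scene scores.
import numpy as np

rng = np.random.default_rng(0)
SCENES = ['beach', 'forest', 'urban']
W_scene = rng.normal(0, 1, (len(SCENES), 4 * 3))   # 4 grid cells x 3 material scores

def material_scores(cell):                          # stand-in material classifier
    return np.array([cell.mean(), cell.std(), cell.max()])

def classify_video(frames, dark_thresh=0.1):
    votes = []
    for frame in frames:                            # frame: (H, W) grayscale in [0, 1]
        if frame.mean() < dark_thresh:              # discard poor candidates
            continue
        h, w = frame.shape
        cells = [frame[:h//2, :w//2], frame[:h//2, w//2:],
                 frame[h//2:, :w//2], frame[h//2:, w//2:]]
        arrangement = np.concatenate([material_scores(c) for c in cells])
        votes.append(W_scene @ arrangement)         # per-frame scene scores
    return SCENES[int(np.argmax(np.mean(votes, axis=0)))]

frames = [rng.random((32, 32)) for _ in range(8)] + [np.zeros((32, 32))]
print(classify_video(frames))
```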
28.
Bailloeul, Timothee; Zhu, Caizhi; Xu, Yinghui, "Image learning automatic annotation retrieval method and device"
Inventors:
Bailloeul, Timothee; Zhu, Caizhi; Xu, Yinghui
Abstract:
A first image having annotations is segmented into one or more image regions. Image feature vectors and text feature vectors are extracted from all the image regions to obtain an image feature matrix and a text feature matrix. The image feature matrix and the text feature matrix are projected into a sub-space to obtain the projected image feature matrix and the text feature matrix. The projected image feature matrix and the text feature matrix are stored. First links between the image regions, second links between the first image and the image regions, third links between the first image and the annotations, and fourth links between the annotations are established. Weights of all the links are calculated. A graph showing a triangular relationship between the first image, image regions, and annotations is obtained based on all the links and the weights of the links.
Assignee:
RICOH CO LTD
Filing Date:
19 May 2009
Grant Date:
31 July 2012
Patent Classes:
Current U.S. Class:
345440000, 345419000, 345422000, 345427000, 345440100, 345440200, 345441000, 345442000, 382164000, 707728000, 707737000
Current International Class:
G06T0112000, G06F0151600
29.
Haveliwala, Taher; Gomes, Benedict; Singhal, Amitabh K., "Using game responses to gather data"
Inventors:
Haveliwala, Taher; Gomes, Benedict; Singhal, Amitabh K.
Abstract:
A system provides images or questions to multiple game participants and receives labels or answers in response thereto. The system uses the labels or answers for various data gathering purposes.
Assignee:
GOOGLE INC
Filing Date:
29 June 2005
Grant Date:
04 October 2011
Patent Classes:
Current U.S. Class:
707602000, 706020000, 706021000, 706025000, 707603000
Current International Class:
G06F0070000, G06F0170000, G06E0010000, G06E0030000, G06F0151800, G06G0070000, G06N0030800
30.
von Ahn, Luis; Liu, Ruoran; Blum, Manuel; Efros, Alexei A.; Veloso, Maria Manuela, "Method, apparatus, and system for object recognition, object segmentation, and knowledge acquisition"
Inventors:
von Ahn, Luis; Liu, Ruoran; Blum, Manuel; Efros, Alexei A.; Veloso, Maria Manuela
Abstract:
A method, comprising displaying an image to a first player, displaying a portion of the image to a second player wherein the portion of the image displayed to the second player is less than all of the image and wherein the portion of the image displayed to the second player is determined by an action of the first player, allowing the second player to submit a word, and determining whether the word submitted by the second player is related to the image. The present invention also includes apparatuses and systems.
Assignee:
CARNEGIE MELLON UNIVERSITY
Filing Date:
14 July 2006
Grant Date:
31 August 2010
Patent Classes:
Current U.S. Class:
463009000
Current International Class:
A63F0090000